A run report is the set of files which Mongoose produces in the directory <MONGOOSE_DIR>/log/<RUN_ID>. Starting with Mongoose 0.8, all the key log files (items.csv, perf.avg.csv, perf.trace.csv and perf.sum.csv) are produced in pure CSV format, so any mature tool that supports CSV may be used to open and process the report components.
As an example, suppose a Mongoose run produced 10 data items of random size and we would like to calculate the total size of the generated content. Open items.csv in any spreadsheet editor and select the third column, which contains the data item sizes; the total appears in the status bar as the Sum value.
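The same calculation can also be scripted. Below is a minimal Python sketch, assuming (as described above) that the data item size in bytes is the third column of items.csv:

```python
import csv

def total_items_size(path):
    """Sum the third column (data item size, bytes) of a Mongoose items.csv file."""
    total = 0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 3:
                total += int(row[2])
    return total

# Example: total_items_size("log/<RUN_ID>/items.csv")
```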
Example scenarios location: scenario/write/*.json
Mongoose creates the items by default (if the load type is not specified), so it's enough to just run the default scenario:
java -jar mongoose-<VERSION>/mongoose.jar
java -Ditem.data.size=100 ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.data.size=4KB-16KB ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.data.size=0-100MB,2.5 ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
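To make the biased-range form concrete: one simple way to model a size distribution skewed toward the lower bound is to raise a uniform sample to the bias power. The Python sketch below is only an illustration of the idea behind a value like 0-100MB,2.5, not Mongoose's actual algorithm:

```python
import random

def sample_size(lo, hi, bias=1.0):
    """Sample an item size from [lo, hi]. With bias > 1 the distribution
    is skewed toward the lower bound (illustration only, not Mongoose's
    exact size-generation code)."""
    u = random.random() ** bias  # bias > 1 pushes u toward 0
    return int(lo + (hi - lo) * u)

# Model -Ditem.data.size=0-100MB,2.5
sizes = [sample_size(0, 100 * 2**20, 2.5) for _ in range(1000)]
```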
In order to enable the update mode for the Write load type it's necessary to specify the count of the random byte ranges.
Example scenarios location: scenario/partial/update-multiple-random-ranges.json
The example below updates the data items from the specified source file, using 10 random byte ranges per request.
java -Dload.type=update -Ditem.data.ranges=10 -Ditem.src.file=<PATH_TO_ITEM_LIST_CSV_FILE> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
In order to enable the append mode for the Write load type it's necessary to specify a fixed byte range whose start offset is equal to the current size of the data items to be appended.
Example scenarios location: scenario/partial/append.json
The example below appends the data items, growing them from 4KB to 8KB. Note that the source data items should be 4KB in size.
java -Dload.type=update -Ditem.data.ranges=4096-8192 -Ditem.src.file=<PATH_TO_ITEM_LIST_CSV_FILE> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/copy/*.json
The example below copies the items from the source container to the target container:
java [-Ditem.dst.container=<TARGET_CONTAINER>] -Ditem.src.container=<SOURCE_CONTAINER> [-Ditem.src.file=<PATH_TO_ITEMS_LIST_CSV_FILE>] ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
See the Mongoose Copy Mode functional specification for details.
In order to use the Read load type it's necessary to set the "load.type" configuration parameter to "read".
Example scenarios location: scenario/read/*.json
java -Dload.type=read -Ditem.data.verify=false -Ditem.dst.container=<CONTAINER_WITH_ITEMS> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
In order to use the Delete load type it's necessary to set the "load.type" configuration parameter to "delete".
Example scenarios location: scenario/delete/*.json
java -Dload.type=delete -Ditem.dst.container=<CONTAINER_WITH_ITEMS> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/limit/*.json
The load jobs may be limited in four ways (by item count, time, rate and size), used in any combination.
Example scenarios location: scenario/limit/by-count.json
Load with no more than N items:
java -Dload.limit.count=<N> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Load for no longer than 1 hour:
java -Dload.limit.time=1h ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/limit/by-rate.json
Perform a load job at a rate of no more than 1234.5 operations per second:
java [-Ditem.data.size=0] -Dload.limit.rate=1234.5 [-Dload.threads=1000] ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Load no more than 100GB of data:
java -Dload.limit.size=100GB ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -jar mongoose-<VERSION>/mongoose.jar -f <PATH_TO_SCENARIO_FILE>
Example scenarios location: scenario/distributed/*.json
java -jar mongoose-<VERSION>/mongoose.jar server
java -jar mongoose-<VERSION>/mongoose.jar client -f <PATH_TO_SCENARIO_FILE>
java -jar mongoose-<VERSION>/mongoose.jar wsmock
java -jar mongoose-<VERSION>/mongoose.jar webui
In order to perform a load with container items it's necessary to set the "item.type" configuration parameter to "container".
Example scenarios location: scenario/container/*.json
java -Ditem.type=container ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/container/read-containers-with-items.json
java -Ditem.type=container -Dload.type=read -Ditem.src.file=<PATH_TO_ITEMS_LIST_CSV_FILE> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Note that the total byte count and the bytes-per-second (BW) metrics are calculated while reading the containers with data items. The size of a container is calculated as the sum of the sizes of the included data items.
java -Ditem.type=container -Dload.type=delete -Ditem.src.file=<PATH_TO_ITEMS_LIST_CSV_FILE> ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
The "data" item type is used by default.
Example scenarios location: scenario/ecs/write-s3.json
java -Dauth.id=<USER_ID> -Dauth.secret=<SECRET> [-Ditem.dst.container=<TARGET_BUCKET>] -Dstorage.addrs=10.20.30.40 -Dstorage.port=8080 -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/ecs/write-atmos.json
java -Dauth.id=<USER_ID> [-Dauth.token=<SUBTENANT>] -Dauth.secret=<SECRET> -Dstorage.addrs=10.20.30.40 -Dstorage.port=8080 -Dstorage.http.api=atmos -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/ecs/write-swift.json
java -Dauth.id=<USER_ID> [-Dauth.token=<TOKEN>] -Dauth.secret=<SECRET> [-Ditem.dst.container=<TARGET_CONTAINER>] -Dstorage.addrs=10.20.30.40 -Dstorage.port=8080 -Dstorage.http.api=swift -Dstorage.http.namespace=<NS> -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Dauth.id=wuser1@sanity.local -Dauth.secret=<SECRET> [-Ditem.dst.container=<TARGET_BUCKET>] -Dstorage.addrs=10.20.30.40,10.20.30.41,10.20.30.42 -Dstorage.port=9020 -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Dauth.id=wuser1@sanity.local [-Dauth.token=<SUBTENANT>] -Dauth.secret=<SECRET> -Dstorage.addrs=10.20.30.40,10.20.30.41,10.20.30.42 -Dstorage.port=9022 -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Dauth.id=wuser1@sanity.local [-Dauth.token=<TOKEN>] -Dauth.secret=<SECRET> [-Ditem.dst.container=<TARGET_CONTAINER>] -Dstorage.addrs=10.20.30.40,10.20.30.41,10.20.30.42 -Dstorage.port=9024 -Dstorage.http.api=swift -Dstorage.http.namespace=s3 -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
In order to use the Filesystem load engine it's necessary to set the "storage.type" configuration parameter to "fs".
Example scenarios location: scenario/fs/*.json
Example scenarios location: scenario/fs/write-to-custom-dir.json
java -Ditem.dst.container=<PATH_TO_TARGET_DIR> -Dstorage.type=fs -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/fs/read-from-custom-dir.json
java -Ditem.dst.container=<PATH_TO_TARGET_DIR> [<ITEM_SRC_FILE_OR_CONTAINER>] -Dload.type=read -Dstorage.type=fs -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/fs/overwrite-circularly.json
java -Dload.type=update -Ditem.dst.container=<PATH_TO_TARGET_DIR> [<ITEM_SRC_FILE_OR_CONTAINER>] -Dload.circular=true -Dstorage.type=fs -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
A user may use a custom file as the content source for data generation and verification. The path to this custom file should be specified as the "item.data.content.file" configuration parameter. There are two predefined content source files: conf/content/textexample and conf/content/zerobytes.
Example scenarios location: scenario/content/*.json
The same content source should be used for writing the data items and for the subsequent reading in order to pass the data verification.
java -Ditem.data.content.file=mongoose-<VERSION>/conf/content/textexample ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.data.content.file=mongoose-<VERSION>/conf/content/zerobytes ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
In order to load with a fixed set of items "infinitely" (each item is written/read again and again), a user should set the "load.circular" configuration parameter to true.
Example scenarios location: scenario/circular/*.json
java -Dload.type=read -Ditem.data.verify=false -Ditem.dst.container=<CONTAINER_WITH_ITEMS> [<ITEM_SRC_FILE_OR_CONTAINER>] -Dload.circular=true ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.data.verify=false -Ditem.dst.container=<CONTAINER_WITH_ITEMS> [<ITEM_SRC_FILE_OR_CONTAINER>] -Ditem.data.ranges=1 -Dload.circular=true ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
{
    "type" : "load",
    "config" : {
        // the configuration hierarchy goes here
    }
}
{
    "type" : "precondition",
    "config" : {
        // the configuration hierarchy goes here
    }
}
{
    "type" : "sequential",
    "jobs" : [
        { "type" : "", ... },
        { "type" : "", ... },
        ...
    ]
}
{
    "type" : "parallel",
    "jobs" : [
        { "type" : "", ... },
        { "type" : "", ... },
        ...
    ]
}
{
    "type" : "sequential",
    "jobs" : [
        {
            "type" : "precondition",
            "config" : { "item" : { "dst" : { "file" : } } ... }
        },
        {
            "type" : "",
            "config" : { "item" : { "src" : { "file" : } } ... }
        }
    ]
}
{
    "type" : "sequential",
    "config" : {
        // the configuration specified here will be inherited by the child jobs
    },
    "jobs" : [
        { "type" : "load", ... },
        ...
    ]
}
{
    "type" : "command",
    "value" : "killall -9 java"
}
{
    "type" : "command",
    "value" : "find /",
    "blocking" : false
}
{
    "type" : "sequential",
    "config" : {
        // shared configuration values inherited by the child jobs
    },
    "jobs" : [
        {
            "type" : "load",
            "config" : {
                // specific configuration for the 1st load job
            }
        },
        { "type" : "command", "value" : "sleep 5s" },
        {
            "type" : "load",
            "config" : {
                // specific configuration for the 2nd load job
            }
        }
    ]
}
{
    "type" : "rampup",
    "config" : {
        "item" : { "data" : { "size" : [ 0, "1KB", "1MB", "1GB" ] } },
        "load" : {
            "limit" : { "time" : "10m", "count" : 10000000 },
            "threads" : [ 1, 10, 100, 1000 ],
            "type" : [ "write", "read", "delete" ]
        }
    }
}
Scenario schema location: scenario/scenario-schema.json
{
    "type" : "for",
    "value" : "threads",
    "in" : [ 1, 10, 100, 1000, 10000, 100000 ],
    "config" : { "load" : { "threads" : "${threads}" } },
    "jobs" : [ { "type" : "load" } ]
}
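The expansion performed by such a "for" job can be illustrated with a short Python sketch. This is a simplified model of the substitution (Mongoose's own implementation may coerce the substituted values differently):

```python
import json

# Config template containing the placeholder, as in the scenario above
template = {"load": {"threads": "${threads}"}}

def expand_for(var, values, config_template):
    """For each value, substitute it for "${var}" in the config template
    and produce one load job per value (illustration only)."""
    jobs = []
    for v in values:
        cfg_text = json.dumps(config_template).replace("${%s}" % var, str(v))
        jobs.append({"type": "load", "config": json.loads(cfg_text)})
    return jobs

jobs = expand_for("threads", [1, 10, 100, 1000], template)
```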
{ "type" : "for", "jobs" : [ { "type" : "load" } ] }
{ "type" : "for", "value" : 10, "jobs" : [ { "type" : "load" } ] }
{
    "type" : "for",
    "value" : "i",
    "in" : "2.71828182846-3.1415926,0.1",
    "jobs" : [ { "type" : "command", "value" : "echo ${i}" } ]
}
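A numeric range with a step expands to a sequence of values. The Python sketch below illustrates one plausible expansion of "2.71828182846-3.1415926,0.1" (whether the end point itself would be included is an implementation detail of Mongoose):

```python
def frange(start, end, step):
    """Yield start, start+step, ... while within the end bound
    (with a small tolerance for floating-point accumulation)."""
    v = start
    while v <= end + 1e-9:
        yield round(v, 11)
        v += step

values = list(frange(2.71828182846, 3.1415926, 0.1))
```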
Example scenarios location: scenario/dynamic/*.json
Example scenarios location: scenario/dynamic/custom-http-headers.json
java -Dstorage.http.headers.myOwnHeaderName=MyOwnHeaderValue -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/dynamic/custom-http-headers-with-dynamic-values.json
java -Dstorage.http.headers.myOwnHeaderName=MyOwnHeaderValue\ %d[0-1000]\ %f{###.##}[-2--1]\ %D{yyyy-MM-dd'T'HH:mm:ssZ}[1970/01/01-2016/01/01] -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
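To illustrate the kinds of values these patterns substitute (%d[0-1000] — a random integer from the range, %f{###.##}[-2--1] — a formatted random float, %D{...}[...] — a random timestamp in the given date range), here is a Python sketch. The functions below are illustrative stand-ins, not Mongoose's implementation:

```python
import random
import datetime

def rand_int(lo, hi):
    """Stand-in for %d[0-1000]: a random integer in the range."""
    return random.randint(lo, hi)

def rand_float(lo, hi, fmt="{:.2f}"):
    """Stand-in for %f{###.##}[-2--1]: a random float, two decimal places."""
    return fmt.format(random.uniform(lo, hi))

def rand_date(lo, hi, fmt="%Y-%m-%dT%H:%M:%S%z"):
    """Stand-in for %D{yyyy-MM-dd'T'HH:mm:ssZ}[...]: a random timestamp."""
    t = random.uniform(lo.timestamp(), hi.timestamp())
    return datetime.datetime.fromtimestamp(t, datetime.timezone.utc).strftime(fmt)

# An example header value in the spirit of the command above
value = "MyOwnHeaderValue %s %s %s" % (
    rand_int(0, 1000),
    rand_float(-2, -1),
    rand_date(datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc),
              datetime.datetime(2016, 1, 1, tzinfo=datetime.timezone.utc)),
)
```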
Example scenarios location: scenario/dynamic/write-to-variable-dir.json
java -Ditem.dst.container=<PATH_TO_TARGET_DIR>/%p\{16\;2\} -Dstorage.type=fs ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
Example scenarios location: scenario/naming/*.json
java -Ditem.naming.type=asc ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.naming.type=desc ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.naming.radix=10 ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
java -Ditem.naming.prefix=item_ ... -jar mongoose-<VERSION>/mongoose.jar -f mongoose-<VERSION>/scenario/default.json
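The prefix and radix parameters above suggest a naming scheme of the form "prefix + sequence number rendered in the given radix". The Python sketch below illustrates that assumption (it is not Mongoose's actual naming code):

```python
def item_name(seq, prefix="", radix=36, width=0):
    """Render a sequence number in the given radix and prepend the prefix
    (an assumed model of prefix/radix-based item naming)."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if seq == 0:
        s = "0"
    else:
        s = ""
        n = seq
        while n:
            s = digits[n % radix] + s
            n //= radix
    return prefix + s.rjust(width, "0")

# item_name(12345, prefix="item_", radix=10) -> "item_12345"
```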
The feature is available since v2.1.0
Example scenarios location: scenario/ssl/*.json
java -jar mongoose-<VERSION>/mongoose.jar -f scenario/ssl/write-single-item.json
java -Dnetwork.ssl=true -Dstorage.port=9021 ... -jar mongoose-<VERSION>/mongoose.jar
Open the file conf/logging.json in a text editor, find the "pattern" attribute value (around line 45) and remove the leading "%highlight{" and the trailing "}" characters from it.