peerplays_migrated/tests/elasticsearch/main.cpp

/*
* Copyright (c) 2018 oxarbitrage and contributors.
*
* The MIT License
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <graphene/app/api.hpp>
#include <graphene/utilities/tempdir.hpp>
#include <fc/crypto/digest.hpp>
#include <graphene/utilities/elasticsearch.hpp>
#include <graphene/elasticsearch/elasticsearch_plugin.hpp>
#include "../common/database_fixture.hpp"
#define BOOST_TEST_MODULE Elastic Search Database Tests
#include <boost/test/included/unit_test.hpp>
using namespace graphene::chain;
using namespace graphene::chain::test;
using namespace graphene::app;
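// NOTE: these tests exercise the account history (elasticsearch) and object
// (es_objects) plugins end to end and assume a local Elasticsearch node
// reachable at http://localhost:9200 (the URL hard-coded in each case below);
// without such a node the HTTP calls will simply fail.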
BOOST_FIXTURE_TEST_SUITE( elasticsearch_tests, database_fixture )
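// Checks that account history operations are pushed into Elasticsearch as
// blocks are generated: the document count grows with each batch of ops and
// the stored op_object carries the visitor data (e.g. the transfer amount).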
BOOST_AUTO_TEST_CASE(elasticsearch_account_history) {
try {
CURL *curl; // curl handle
curl = curl_easy_init();
graphene::utilities::ES es;
es.curl = curl;
es.elasticsearch_url = "http://localhost:9200/";
es.index_prefix = "peerplays-";
//es.auth = "elastic:changeme";
// delete all first
auto delete_account_history = graphene::utilities::deleteAll(es);
fc::usleep(fc::milliseconds(1000)); // wait for index.refresh_interval so the deletions become visible
if(delete_account_history) { // all records deleted
// account_id_type() (1.2.0) is involved in the following 3 ops
create_bitasset("USD", account_id_type());
auto dan = create_account("dan");
auto bob = create_account("bob");
generate_block();
fc::usleep(fc::milliseconds(1000));
// for later use
//int asset_create_op_id = operation::tag<asset_create_operation>::value;
//int account_create_op_id = operation::tag<account_create_operation>::value;
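// match_all query against the _count endpoint returns the total number of
// account history documents indexed under the peerplays-* prefix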
string query = "{ \"query\" : { \"bool\" : { \"must\" : [{\"match_all\": {}}] } } }";
es.endpoint = es.index_prefix + "*/data/_count";
es.query = query;
auto res = graphene::utilities::simpleQuery(es);
variant j = fc::json::from_string(res);
auto total = j["count"].as_string();
BOOST_CHECK_EQUAL(total, "5");
es.endpoint = es.index_prefix + "*/data/_search";
res = graphene::utilities::simpleQuery(es);
j = fc::json::from_string(res);
auto first_id = j["hits"]["hits"][size_t(0)]["_id"].as_string();
BOOST_CHECK_EQUAL(first_id, "2.9.0");
generate_block();
auto willie = create_account("willie");
generate_block();
fc::usleep(fc::milliseconds(1000)); // index.refresh_interval
es.endpoint = es.index_prefix + "*/data/_count";
res = graphene::utilities::simpleQuery(es);
j = fc::json::from_string(res);
total = j["count"].as_string();
BOOST_CHECK_EQUAL(total, "7");
// do some transfers in 1 block
transfer(account_id_type()(db), bob, asset(100));
transfer(account_id_type()(db), bob, asset(200));
transfer(account_id_type()(db), bob, asset(300));
generate_block();
fc::usleep(fc::milliseconds(1000)); // index.refresh_interval
res = graphene::utilities::simpleQuery(es);
j = fc::json::from_string(res);
total = j["count"].as_string();
BOOST_CHECK_EQUAL(total, "13");
// check the visitor data
auto block_date = db.head_block_time();
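// the plugin derives the index name from the block timestamp, so use the head
// block time to locate the index that holds the most recent operations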
std::string index_name = graphene::utilities::generateIndexName(block_date, "peerplays-");
es.endpoint = index_name + "/data/2.9.12"; // we know last op is a transfer of amount 300
res = graphene::utilities::getEndPoint(es);
j = fc::json::from_string(res);
auto last_transfer_amount = j["_source"]["operation_history"]["op_object"]["amount_"]["amount"].as_string();
BOOST_CHECK_EQUAL(last_transfer_amount, "300");
}
}
catch (fc::exception &e) {
edump((e.to_detail_string()));
throw;
}
}
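// Checks the objects indexed under the ppobjects- prefix (es_objects plugin):
// creating the USD bitasset should produce both an asset document and a
// matching bitasset data document.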
BOOST_AUTO_TEST_CASE(elasticsearch_objects) {
try {
CURL *curl; // curl handle
curl = curl_easy_init();
graphene::utilities::ES es;
es.curl = curl;
es.elasticsearch_url = "http://localhost:9200/";
es.index_prefix = "ppobjects-";
//es.auth = "elastic:changeme";
// delete all first
auto delete_objects = graphene::utilities::deleteAll(es);
generate_block();
fc::usleep(fc::milliseconds(1000));
if(delete_objects) { // all records deleted
// asset and bitasset
create_bitasset("USD", account_id_type());
generate_block();
fc::usleep(fc::milliseconds(1000));
string query = "{ \"query\" : { \"bool\" : { \"must\" : [{\"match_all\": {}}] } } }";
es.endpoint = es.index_prefix + "*/data/_count";
es.query = query;
auto res = graphene::utilities::simpleQuery(es);
variant j = fc::json::from_string(res);
auto total = j["count"].as_string();
BOOST_CHECK_EQUAL(total, "2");
es.endpoint = es.index_prefix + "asset/data/_search";
res = graphene::utilities::simpleQuery(es);
j = fc::json::from_string(res);
auto first_id = j["hits"]["hits"][size_t(0)]["_source"]["symbol"].as_string();
BOOST_CHECK_EQUAL(first_id, "USD");
auto bitasset_data_id = j["hits"]["hits"][size_t(0)]["_source"]["bitasset_data_id"].as_string();
es.endpoint = es.index_prefix + "bitasset/data/_search";
es.query = "{ \"query\" : { \"bool\": { \"must\" : [{ \"term\": { \"object_id\": \""+bitasset_data_id+"\"}}] } } }";
res = graphene::utilities::simpleQuery(es);
j = fc::json::from_string(res);
auto bitasset_object_id = j["hits"]["hits"][size_t(0)]["_source"]["object_id"].as_string();
BOOST_CHECK_EQUAL(bitasset_object_id, bitasset_data_id);
}
}
catch (fc::exception &e) {
edump((e.to_detail_string()));
throw;
}
}
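// Cleanup-only case: deletes both index prefixes (peerplays- and ppobjects-);
// the body of the if is intentionally left empty.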
BOOST_AUTO_TEST_CASE(elasticsearch_suite) {
try {
CURL *curl; // curl handle
curl = curl_easy_init();
graphene::utilities::ES es;
es.curl = curl;
es.elasticsearch_url = "http://localhost:9200/";
es.index_prefix = "peerplays-";
auto delete_account_history = graphene::utilities::deleteAll(es);
fc::usleep(fc::milliseconds(1000));
es.index_prefix = "ppobjects-";
auto delete_objects = graphene::utilities::deleteAll(es);
fc::usleep(fc::milliseconds(1000));
if(delete_account_history && delete_objects) { // all records deleted
}
}
catch (fc::exception &e) {
edump((e.to_detail_string()));
throw;
}
}
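// Exercises the history_api backed by the elasticsearch plugin: a handful of
// bitassets are created for two accounts and get_account_history is queried
// with various (stop, limit, start) combinations.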
BOOST_AUTO_TEST_CASE(elasticsearch_history_api) {
try {
CURL *curl; // curl handle
curl = curl_easy_init();
graphene::utilities::ES es;
es.curl = curl;
es.elasticsearch_url = "http://localhost:9200/";
es.index_prefix = "peerplays-";
auto delete_account_history = graphene::utilities::deleteAll(es);
generate_block();
fc::usleep(fc::milliseconds(1000));
if(delete_account_history) {
create_bitasset("USD", account_id_type()); // create op 0
const account_object& dan = create_account("dan"); // create op 1
create_bitasset("CNY", dan.id); // create op 2
create_bitasset("BTC", account_id_type()); // create op 3
create_bitasset("XMR", dan.id); // create op 4
create_bitasset("EUR", account_id_type()); // create op 5
create_bitasset("OIL", dan.id); // create op 6
generate_block();
fc::usleep(fc::milliseconds(1000));
graphene::app::history_api hist_api(app);
app.enable_plugin("elasticsearch");
// f(A, 0, 4, 9) = { 5, 3, 1, 0 }
auto histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(9));
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 0u);
// f(A, 0, 4, 6) = { 5, 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(6));
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 0u);
// f(A, 0, 4, 5) = { 5, 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(5));
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 0u);
// f(A, 0, 4, 4) = { 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(4));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 0u);
// f(A, 0, 4, 3) = { 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(3));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 0u);
// f(A, 0, 4, 2) = { 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(2));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 0u);
// f(A, 0, 4, 1) = { 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type(1));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 0u);
// f(A, 0, 4, 0) = { 5, 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 4, operation_history_id_type());
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 0u);
// f(A, 1, 5, 9) = { 5, 3 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(9));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
// f(A, 1, 5, 6) = { 5, 3 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(6));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
// f(A, 1, 5, 5) = { 5, 3 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(5));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
// f(A, 1, 5, 4) = { 3 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(4));
BOOST_CHECK_EQUAL(histories.size(), 1u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 3u);
// f(A, 1, 5, 3) = { 3 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(3));
BOOST_CHECK_EQUAL(histories.size(), 1u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 3u);
// f(A, 1, 5, 2) = { }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(2));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// f(A, 1, 5, 1) = { }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(1));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// f(A, 1, 5, 0) = { 5, 3 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(1), 5, operation_history_id_type(0));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
// f(A, 0, 3, 9) = { 5, 3, 1 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(9));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
// f(A, 0, 3, 6) = { 5, 3, 1 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(6));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
// f(A, 0, 3, 5) = { 5, 3, 1 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(5));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
// f(A, 0, 3, 4) = { 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(4));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 0u);
// f(A, 0, 3, 3) = { 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(3));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 0u);
// f(A, 0, 3, 2) = { 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(2));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 0u);
// f(A, 0, 3, 1) = { 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type(1));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 0u);
// f(A, 0, 3, 0) = { 5, 3, 1 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(), 3, operation_history_id_type());
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
// f(B, 0, 4, 9) = { 6, 4, 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(9));
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 6u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 4u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 1u);
// f(B, 0, 4, 6) = { 6, 4, 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(6));
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 6u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 4u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 1u);
// f(B, 0, 4, 5) = { 4, 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(5));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 4u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
// f(B, 0, 4, 4) = { 4, 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(4));
BOOST_CHECK_EQUAL(histories.size(), 3u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 4u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 1u);
// f(B, 0, 4, 3) = { 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(3));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 1u);
// f(B, 0, 4, 2) = { 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(2));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 1u);
// f(B, 0, 4, 1) = { 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type(1));
BOOST_CHECK_EQUAL(histories.size(), 1u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 1u);
// f(B, 0, 4, 0) = { 6, 4, 2, 1 }
histories = hist_api.get_account_history("dan", operation_history_id_type(), 4, operation_history_id_type());
BOOST_CHECK_EQUAL(histories.size(), 4u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 6u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 4u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 2u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 1u);
// f(B, 2, 4, 9) = { 6, 4 }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(9));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 6u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 4u);
// f(B, 2, 4, 6) = { 6, 4 }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(6));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 6u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 4u);
// f(B, 2, 4, 5) = { 4 }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(5));
BOOST_CHECK_EQUAL(histories.size(), 1u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 4u);
// f(B, 2, 4, 4) = { 4 }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(4));
BOOST_CHECK_EQUAL(histories.size(), 1u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 4u);
// f(B, 2, 4, 3) = { }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(3));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// f(B, 2, 4, 2) = { }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(2));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// f(B, 2, 4, 1) = { }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(1));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// f(B, 2, 4, 0) = { 6, 4 }
histories = hist_api.get_account_history("dan", operation_history_id_type(2), 4, operation_history_id_type(0));
BOOST_CHECK_EQUAL(histories.size(), 2u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 6u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 4u);
// a limit of 0 returns an empty result
histories = hist_api.get_account_history("dan", operation_history_id_type(0), 0, operation_history_id_type(0));
BOOST_CHECK_EQUAL(histories.size(), 0u);
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(3), 0, operation_history_id_type(9));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// non-existent account
histories = hist_api.get_account_history("1.2.18", operation_history_id_type(0), 4, operation_history_id_type(0));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// create a new account C = alice { 7 }
auto alice = create_account("alice");
generate_block();
fc::usleep(fc::milliseconds(1000));
// f(C, 0, 4, 10) = { 7 }
histories = hist_api.get_account_history("alice", operation_history_id_type(0), 4, operation_history_id_type(10));
BOOST_CHECK_EQUAL(histories.size(), 1u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 7u);
// f(C, 8, 4, 10) = { }
histories = hist_api.get_account_history("alice", operation_history_id_type(8), 4, operation_history_id_type(10));
BOOST_CHECK_EQUAL(histories.size(), 0u);
// f(A, 0, 10, 0) = { 7, 5, 3, 1, 0 }
histories = hist_api.get_account_history("1.2.0", operation_history_id_type(0), 10, operation_history_id_type(0));
BOOST_CHECK_EQUAL(histories.size(), 5u);
BOOST_CHECK_EQUAL(histories[0].id.instance(), 7u);
BOOST_CHECK_EQUAL(histories[1].id.instance(), 5u);
BOOST_CHECK_EQUAL(histories[2].id.instance(), 3u);
BOOST_CHECK_EQUAL(histories[3].id.instance(), 1u);
BOOST_CHECK_EQUAL(histories[4].id.instance(), 0u);
}
}
catch (fc::exception &e) {
edump((e.to_detail_string()));
throw;
}
}
BOOST_AUTO_TEST_SUITE_END()