peerplays_migrated/libraries/plugins/es_objects/es_objects.cpp
/*
 * Copyright (c) 2018 oxarbitrage, and contributors.
 *
 * The MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
#include <graphene/es_objects/es_objects.hpp>
#include <fc/smart_ref_impl.hpp>
#include <curl/curl.h>
#include <graphene/chain/proposal_object.hpp>
#include <graphene/chain/balance_object.hpp>
#include <graphene/chain/market_object.hpp>
#include <graphene/chain/asset_object.hpp>
#include <graphene/chain/account_object.hpp>
#include <graphene/utilities/elasticsearch.hpp>
namespace graphene { namespace es_objects {

namespace detail
{

class es_objects_plugin_impl
{
   public:
      es_objects_plugin_impl(es_objects_plugin& _plugin)
         : _self( _plugin )
      { curl = curl_easy_init(); }
      virtual ~es_objects_plugin_impl();

      bool index_database(const vector<object_id_type>& ids, std::string action);
      bool genesis();
      void remove_from_database(object_id_type id, std::string index);

      es_objects_plugin& _self;
      std::string _es_objects_elasticsearch_url = "http://localhost:9200/";
      std::string _es_objects_auth = "";
      uint32_t _es_objects_bulk_replay = 10000;
      uint32_t _es_objects_bulk_sync = 100;
      bool _es_objects_proposals = true;
      bool _es_objects_accounts = true;
      bool _es_objects_assets = true;
      bool _es_objects_balances = true;
      bool _es_objects_limit_orders = true;
      bool _es_objects_asset_bitasset = true;
      std::string _es_objects_index_prefix = "ppobjects-";
      uint32_t _es_objects_start_es_after_block = 0;
      CURL *curl; // curl easy handle, created in the constructor and released in the destructor
      vector<std::string> bulk;    // bulk lines accumulated until the next flush to Elasticsearch
      vector<std::string> prepare; // lines prepared for the object currently being serialized
      bool _es_objects_keep_only_current = true;
      uint32_t block_number;
      fc::time_point_sec block_time;

   private:
      template<typename T>
      void prepareTemplate(T blockchain_object, string index_name);
};
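
// Indexes the objects that already exist when block 1 is applied (accounts, assets and
// account balances created at genesis), so the Elasticsearch indices start from a complete state.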
bool es_objects_plugin_impl::genesis()
{
   ilog("elasticsearch OBJECTS: inserting data from genesis");

   graphene::chain::database &db = _self.database();
   block_number = db.head_block_num();
   block_time = db.head_block_time();

   if (_es_objects_accounts) {
      auto &index_accounts = db.get_index(1, 2); // account_object index
      index_accounts.inspect_all_objects([this, &db](const graphene::db::object &o) {
         auto obj = db.find_object(o.id);
         auto a = static_cast<const account_object *>(obj);
         prepareTemplate<account_object>(*a, "account");
      });
   }
   if (_es_objects_assets) {
      auto &index_assets = db.get_index(1, 3); // asset_object index
      index_assets.inspect_all_objects([this, &db](const graphene::db::object &o) {
         auto obj = db.find_object(o.id);
         auto a = static_cast<const asset_object *>(obj);
         prepareTemplate<asset_object>(*a, "asset");
      });
   }
   if (_es_objects_balances) {
      auto &index_balances = db.get_index(2, 5); // account_balance_object index
      index_balances.inspect_all_objects([this, &db](const graphene::db::object &o) {
         auto obj = db.find_object(o.id);
         auto b = static_cast<const account_balance_object *>(obj);
         prepareTemplate<account_balance_object>(*b, "balance");
      });
   }

   graphene::utilities::ES es;
   es.curl = curl;
   es.bulk_lines = bulk;
   es.elasticsearch_url = _es_objects_elasticsearch_url;
   es.auth = _es_objects_auth;
   if (!graphene::utilities::SendBulk(es))
      FC_THROW_EXCEPTION(fc::exception, "Error inserting genesis data.");
   else
      bulk.clear();
   return true;
}
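
// Called from the new_objects / changed_objects / removed_objects signals. Prepares one bulk
// line per tracked object (or a delete action) and flushes to Elasticsearch once the batch
// reaches the replay or sync limit.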
bool es_objects_plugin_impl::index_database(const vector<object_id_type>& ids, std::string action)
{
   graphene::chain::database &db = _self.database();
   block_time = db.head_block_time();
   block_number = db.head_block_num();

   if(block_number > _es_objects_start_es_after_block) {

      // check if we are in replay or in sync and change number of bulk documents accordingly
      uint32_t limit_documents = 0;
      if ((fc::time_point::now() - block_time) < fc::seconds(30))
         limit_documents = _es_objects_bulk_sync;
      else
         limit_documents = _es_objects_bulk_replay;

      for (auto const &value: ids) {
         if (value.is<proposal_object>() && _es_objects_proposals) {
            auto obj = db.find_object(value);
            auto p = static_cast<const proposal_object *>(obj);
            if (p != nullptr) {
               if (action == "delete")
                  remove_from_database(p->id, "proposal");
               else
                  prepareTemplate<proposal_object>(*p, "proposal");
            }
         } else if (value.is<account_object>() && _es_objects_accounts) {
            auto obj = db.find_object(value);
            auto a = static_cast<const account_object *>(obj);
            if (a != nullptr) {
               if (action == "delete")
                  remove_from_database(a->id, "account");
               else
                  prepareTemplate<account_object>(*a, "account");
            }
         } else if (value.is<asset_object>() && _es_objects_assets) {
            auto obj = db.find_object(value);
            auto a = static_cast<const asset_object *>(obj);
            if (a != nullptr) {
               if (action == "delete")
                  remove_from_database(a->id, "asset");
               else
                  prepareTemplate<asset_object>(*a, "asset");
            }
         } else if (value.is<account_balance_object>() && _es_objects_balances) {
            auto obj = db.find_object(value);
            auto b = static_cast<const account_balance_object *>(obj);
            if (b != nullptr) {
               if (action == "delete")
                  remove_from_database(b->id, "balance");
               else
                  prepareTemplate<account_balance_object>(*b, "balance");
            }
         } else if (value.is<limit_order_object>() && _es_objects_limit_orders) {
            auto obj = db.find_object(value);
            auto l = static_cast<const limit_order_object *>(obj);
            if (l != nullptr) {
               if (action == "delete")
                  remove_from_database(l->id, "limitorder");
               else
                  prepareTemplate<limit_order_object>(*l, "limitorder");
            }
         } else if (value.is<asset_bitasset_data_object>() && _es_objects_asset_bitasset) {
            auto obj = db.find_object(value);
            auto ba = static_cast<const asset_bitasset_data_object *>(obj);
            if (ba != nullptr) {
               if (action == "delete")
                  remove_from_database(ba->id, "bitasset");
               else
                  prepareTemplate<asset_bitasset_data_object>(*ba, "bitasset");
            }
         }
      }

      if (curl && bulk.size() >= limit_documents) { // batch limit reached, ready to send data to elasticsearch
         graphene::utilities::ES es;
         es.curl = curl;
         es.bulk_lines = bulk;
         es.elasticsearch_url = _es_objects_elasticsearch_url;
         es.auth = _es_objects_auth;
         if (!graphene::utilities::SendBulk(es))
            return false;
         else
            bulk.clear();
      }
   }
   return true;
}
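
// Queues a bulk "delete" action for the object's document. Only used when a single current
// document per object is kept (es-objects-keep-only-current); otherwise deletions are not sent.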
void es_objects_plugin_impl::remove_from_database( object_id_type id, std::string index)
{
   if(_es_objects_keep_only_current)
   {
      fc::mutable_variant_object delete_line;
      delete_line["_id"] = string(id);
      delete_line["_index"] = _es_objects_index_prefix + index;
      delete_line["_type"] = "data";
      fc::mutable_variant_object final_delete_line;
      final_delete_line["delete"] = delete_line;
      prepare.push_back(fc::json::to_string(final_delete_line));
      std::move(prepare.begin(), prepare.end(), std::back_inserter(bulk));
      prepare.clear();
   }
}
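
// Converts a blockchain object to a variant, runs it through the adaptor_struct, annotates it
// with the current block number and time, and queues the resulting bulk header and data lines.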
template<typename T>
void es_objects_plugin_impl::prepareTemplate(T blockchain_object, string index_name)
{
   fc::mutable_variant_object bulk_header;
   bulk_header["_index"] = _es_objects_index_prefix + index_name;
   bulk_header["_type"] = "data";
   if(_es_objects_keep_only_current)
   {
      bulk_header["_id"] = string(blockchain_object.id);
   }

   adaptor_struct adaptor;
   fc::variant blockchain_object_variant;
   fc::to_variant( blockchain_object, blockchain_object_variant, GRAPHENE_NET_MAX_NESTED_OBJECTS );
   fc::mutable_variant_object o = adaptor.adapt(blockchain_object_variant.get_object());

   o["object_id"] = string(blockchain_object.id);
   o["block_time"] = block_time;
   o["block_number"] = block_number;

   string data = fc::json::to_string(o);
   prepare = graphene::utilities::createBulk(bulk_header, std::move(data));
   std::move(prepare.begin(), prepare.end(), std::back_inserter(bulk));
   prepare.clear();
}
es_objects_plugin_impl::~es_objects_plugin_impl()
{
   if (curl) {
      curl_easy_cleanup(curl);
      curl = nullptr;
   }
}

} // end namespace detail
es_objects_plugin::es_objects_plugin() :
   my( new detail::es_objects_plugin_impl(*this) )
{
}

es_objects_plugin::~es_objects_plugin()
{
}

std::string es_objects_plugin::plugin_name()const
{
   return "es_objects";
}

std::string es_objects_plugin::plugin_description()const
{
   return "Stores blockchain objects in ES database. Experimental.";
}
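
// Illustrative config.ini snippet for this plugin (values here are examples only; the option
// names and their documented defaults come from plugin_set_program_options() below):
//
//   es-objects-elasticsearch-url = http://localhost:9200/
//   es-objects-index-prefix = ppobjects-
//   es-objects-bulk-replay = 10000
//   es-objects-bulk-sync = 100
//   es-objects-keep-only-current = true
//   es-objects-start-es-after-block = 0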
void es_objects_plugin::plugin_set_program_options(
      boost::program_options::options_description& cli,
      boost::program_options::options_description& cfg
      )
{
   cli.add_options()
         ("es-objects-elasticsearch-url", boost::program_options::value<std::string>(), "Elasticsearch node url(http://localhost:9200/)")
         ("es-objects-auth", boost::program_options::value<std::string>(), "Basic auth username:password('')")
         ("es-objects-bulk-replay", boost::program_options::value<uint32_t>(), "Number of bulk documents to index on replay(10000)")
         ("es-objects-bulk-sync", boost::program_options::value<uint32_t>(), "Number of bulk documents to index on a synchronized chain(100)")
         ("es-objects-proposals", boost::program_options::value<bool>(), "Store proposal objects(true)")
         ("es-objects-accounts", boost::program_options::value<bool>(), "Store account objects(true)")
         ("es-objects-assets", boost::program_options::value<bool>(), "Store asset objects(true)")
         ("es-objects-balances", boost::program_options::value<bool>(), "Store balances objects(true)")
         ("es-objects-limit-orders", boost::program_options::value<bool>(), "Store limit order objects(true)")
         ("es-objects-asset-bitasset", boost::program_options::value<bool>(), "Store feed data(true)")
         ("es-objects-index-prefix", boost::program_options::value<std::string>(), "Add a prefix to the index(ppobjects-)")
         ("es-objects-keep-only-current", boost::program_options::value<bool>(), "Keep only current state of the objects(true)")
         ("es-objects-start-es-after-block", boost::program_options::value<uint32_t>(), "Start doing ES job after block(0)")
         ;
   cfg.add(cli);
}
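
// Wires the plugin into the database signals (genesis data on block 1, create/update/delete on
// the object signals) and reads the es-objects-* options into the implementation.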
void es_objects_plugin::plugin_initialize(const boost::program_options::variables_map& options)
{
   database().applied_block.connect([this](const signed_block &b) {
      if(b.block_num() == 1) {
         if (!my->genesis())
            FC_THROW_EXCEPTION(fc::exception, "Error populating genesis data.");
      }
   });
   database().new_objects.connect([this]( const vector<object_id_type>& ids, const flat_set<account_id_type>& impacted_accounts ) {
      if(!my->index_database(ids, "create"))
      {
         FC_THROW_EXCEPTION(fc::exception, "Error creating object from ES database, we are going to keep trying.");
      }
   });
   database().changed_objects.connect([this]( const vector<object_id_type>& ids, const flat_set<account_id_type>& impacted_accounts ) {
      if(!my->index_database(ids, "update"))
      {
         FC_THROW_EXCEPTION(fc::exception, "Error updating object from ES database, we are going to keep trying.");
      }
   });
   database().removed_objects.connect([this](const vector<object_id_type>& ids, const vector<const object*>& objs, const flat_set<account_id_type>& impacted_accounts) {
      if(!my->index_database(ids, "delete"))
      {
         FC_THROW_EXCEPTION(fc::exception, "Error deleting object from ES database, we are going to keep trying.");
      }
   });

   if (options.count("es-objects-elasticsearch-url")) {
      my->_es_objects_elasticsearch_url = options["es-objects-elasticsearch-url"].as<std::string>();
   }
   if (options.count("es-objects-auth")) {
      my->_es_objects_auth = options["es-objects-auth"].as<std::string>();
   }
   if (options.count("es-objects-bulk-replay")) {
      my->_es_objects_bulk_replay = options["es-objects-bulk-replay"].as<uint32_t>();
   }
   if (options.count("es-objects-bulk-sync")) {
      my->_es_objects_bulk_sync = options["es-objects-bulk-sync"].as<uint32_t>();
   }
   if (options.count("es-objects-proposals")) {
      my->_es_objects_proposals = options["es-objects-proposals"].as<bool>();
   }
   if (options.count("es-objects-accounts")) {
      my->_es_objects_accounts = options["es-objects-accounts"].as<bool>();
   }
   if (options.count("es-objects-assets")) {
      my->_es_objects_assets = options["es-objects-assets"].as<bool>();
   }
   if (options.count("es-objects-balances")) {
      my->_es_objects_balances = options["es-objects-balances"].as<bool>();
   }
   if (options.count("es-objects-limit-orders")) {
      my->_es_objects_limit_orders = options["es-objects-limit-orders"].as<bool>();
   }
   if (options.count("es-objects-asset-bitasset")) {
      my->_es_objects_asset_bitasset = options["es-objects-asset-bitasset"].as<bool>();
   }
   if (options.count("es-objects-index-prefix")) {
      my->_es_objects_index_prefix = options["es-objects-index-prefix"].as<std::string>();
   }
   if (options.count("es-objects-keep-only-current")) {
      my->_es_objects_keep_only_current = options["es-objects-keep-only-current"].as<bool>();
   }
   if (options.count("es-objects-start-es-after-block")) {
      my->_es_objects_start_es_after_block = options["es-objects-start-es-after-block"].as<uint32_t>();
   }
}
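
// Checks at startup that the configured Elasticsearch node is reachable; throws if it is not.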
void es_objects_plugin::plugin_startup()
{
   graphene::utilities::ES es;
   es.curl = my->curl;
   es.elasticsearch_url = my->_es_objects_elasticsearch_url;
   es.auth = my->_es_objects_auth;
   es.index_prefix = my->_es_objects_index_prefix;

   if(!graphene::utilities::checkES(es))
      FC_THROW_EXCEPTION(fc::exception, "ES database is not up in url ${url}", ("url", my->_es_objects_elasticsearch_url));
   ilog("elasticsearch OBJECTS: plugin_startup() begin");
}

} } // namespace graphene::es_objects