/*
 * Copyright (c) 2015 Cryptonomex, Inc., and contributors.
 *
 * The MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/hashed_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/mem_fun.hpp>
#include <boost/multi_index/tag.hpp>

#include <fc/io/raw.hpp>
#include <fc/io/raw_variant.hpp>
#include <fc/log/logger.hpp>
#include <fc/io/json.hpp>

#include <graphene/net/peer_database.hpp>
#include <graphene/net/config.hpp>

namespace graphene { namespace net {
  namespace detail
  {
    using namespace boost::multi_index;

    class peer_database_impl
    {
    public:
      struct last_seen_time_index {};
      struct endpoint_index {};
      // Peer records are indexed two ways: an ordered index on last_seen_time,
      // which determines iteration order, and a hashed unique index on endpoint,
      // which keeps one record per endpoint and gives constant-time lookup.
      typedef boost::multi_index_container<potential_peer_record,
                                           indexed_by<ordered_non_unique<tag<last_seen_time_index>,
                                                                         member<potential_peer_record,
                                                                                fc::time_point_sec,
                                                                                &potential_peer_record::last_seen_time> >,
                                                      hashed_unique<tag<endpoint_index>,
                                                                    member<potential_peer_record,
                                                                           fc::ip::endpoint,
                                                                           &potential_peer_record::endpoint>,
                                                                    std::hash<fc::ip::endpoint> > > > potential_peer_set;

    private:
      potential_peer_set _potential_peer_set;
      fc::path _peer_database_filename;

    public:
      void open(const fc::path& databaseFilename);
      void close();
      void clear();
      void erase(const fc::ip::endpoint& endpointToErase);
      void update_entry(const potential_peer_record& updatedRecord);
      potential_peer_record lookup_or_create_entry_for_endpoint(const fc::ip::endpoint& endpointToLookup);
      fc::optional<potential_peer_record> lookup_entry_for_endpoint(const fc::ip::endpoint& endpointToLookup);

      peer_database::iterator begin() const;
      peer_database::iterator end() const;
      size_t size() const;
    };

    class peer_database_iterator_impl
    {
    public:
      typedef peer_database_impl::potential_peer_set::index<peer_database_impl::last_seen_time_index>::type::iterator last_seen_time_index_iterator;
      last_seen_time_index_iterator _iterator;
      explicit peer_database_iterator_impl(const last_seen_time_index_iterator& iterator) :
        _iterator(iterator)
      {}
    };
    peer_database_iterator::peer_database_iterator( const peer_database_iterator& c ) :
      boost::iterator_facade<peer_database_iterator, const potential_peer_record, boost::forward_traversal_tag>(c){}

    void peer_database_impl::open(const fc::path& peer_database_filename)
    {
      _peer_database_filename = peer_database_filename;
      if (fc::exists(_peer_database_filename))
      {
        try
        {
          std::vector<potential_peer_record> peer_records = fc::json::from_file(_peer_database_filename).as<std::vector<potential_peer_record> >( GRAPHENE_NET_MAX_NESTED_OBJECTS );
          std::copy(peer_records.begin(), peer_records.end(), std::inserter(_potential_peer_set, _potential_peer_set.end()));
          if (_potential_peer_set.size() > MAXIMUM_PEERDB_SIZE)
          {
            // prune database to a reasonable size
            auto iter = _potential_peer_set.begin();
            std::advance(iter, MAXIMUM_PEERDB_SIZE);
            _potential_peer_set.erase(iter, _potential_peer_set.end());
          }
        }
        catch (const fc::exception& e)
        {
          elog("error opening peer database file ${peer_database_filename}, starting with a clean database",
               ("peer_database_filename", _peer_database_filename));
        }
      }
    }

    void peer_database_impl::close()
    {
      std::vector<potential_peer_record> peer_records;
      peer_records.reserve(_potential_peer_set.size());
      std::copy(_potential_peer_set.begin(), _potential_peer_set.end(), std::back_inserter(peer_records));

      try
      {
        fc::path peer_database_filename_dir = _peer_database_filename.parent_path();
        if (!fc::exists(peer_database_filename_dir))
          fc::create_directories(peer_database_filename_dir);
        fc::json::save_to_file( peer_records, _peer_database_filename, GRAPHENE_NET_MAX_NESTED_OBJECTS );
      }
      catch (const fc::exception& e)
      {
        elog("error saving peer database to file ${peer_database_filename}",
             ("peer_database_filename", _peer_database_filename));
      }
      _potential_peer_set.clear();
    }

    void peer_database_impl::clear()
    {
      _potential_peer_set.clear();
    }

    void peer_database_impl::erase(const fc::ip::endpoint& endpointToErase)
    {
      auto iter = _potential_peer_set.get<endpoint_index>().find(endpointToErase);
      if (iter != _potential_peer_set.get<endpoint_index>().end())
        _potential_peer_set.get<endpoint_index>().erase(iter);
    }

    void peer_database_impl::update_entry(const potential_peer_record& updatedRecord)
    {
      auto iter = _potential_peer_set.get<endpoint_index>().find(updatedRecord.endpoint);
      if (iter != _potential_peer_set.get<endpoint_index>().end())
        _potential_peer_set.get<endpoint_index>().modify(iter, [&updatedRecord](potential_peer_record& record) { record = updatedRecord; });
      else
        _potential_peer_set.get<endpoint_index>().insert(updatedRecord);
    }

    potential_peer_record peer_database_impl::lookup_or_create_entry_for_endpoint(const fc::ip::endpoint& endpointToLookup)
    {
      auto iter = _potential_peer_set.get<endpoint_index>().find(endpointToLookup);
      if (iter != _potential_peer_set.get<endpoint_index>().end())
        return *iter;
      return potential_peer_record(endpointToLookup);
    }

    fc::optional<potential_peer_record> peer_database_impl::lookup_entry_for_endpoint(const fc::ip::endpoint& endpointToLookup)
    {
      auto iter = _potential_peer_set.get<endpoint_index>().find(endpointToLookup);
      if (iter != _potential_peer_set.get<endpoint_index>().end())
        return *iter;
      return fc::optional<potential_peer_record>();
    }

    peer_database::iterator peer_database_impl::begin() const
    {
      return peer_database::iterator(new peer_database_iterator_impl(_potential_peer_set.get<last_seen_time_index>().begin()));
    }

    peer_database::iterator peer_database_impl::end() const
    {
      return peer_database::iterator(new peer_database_iterator_impl(_potential_peer_set.get<last_seen_time_index>().end()));
    }

    size_t peer_database_impl::size() const
    {
      return _potential_peer_set.size();
    }

    peer_database_iterator::peer_database_iterator()
    {
    }

    peer_database_iterator::~peer_database_iterator()
    {
    }

    peer_database_iterator::peer_database_iterator(peer_database_iterator_impl* impl) :
      my(impl)
    {
    }

    void peer_database_iterator::increment()
    {
      ++my->_iterator;
    }

    bool peer_database_iterator::equal(const peer_database_iterator& other) const
    {
      return my->_iterator == other.my->_iterator;
    }

    const potential_peer_record& peer_database_iterator::dereference() const
    {
      return *my->_iterator;
    }

  } // end namespace detail

  peer_database::peer_database() :
    my(new detail::peer_database_impl)
  {
  }

  peer_database::~peer_database()
  {}

  void peer_database::open(const fc::path& databaseFilename)
  {
    my->open(databaseFilename);
  }

  void peer_database::close()
  {
    my->close();
  }

  void peer_database::clear()
  {
    my->clear();
  }

  void peer_database::erase(const fc::ip::endpoint& endpointToErase)
  {
    my->erase(endpointToErase);
  }

  void peer_database::update_entry(const potential_peer_record& updatedRecord)
  {
    my->update_entry(updatedRecord);
  }

  potential_peer_record peer_database::lookup_or_create_entry_for_endpoint(const fc::ip::endpoint& endpointToLookup)
  {
    return my->lookup_or_create_entry_for_endpoint(endpointToLookup);
  }

  fc::optional<potential_peer_record> peer_database::lookup_entry_for_endpoint(const fc::ip::endpoint& endpoint_to_lookup)
  {
    return my->lookup_entry_for_endpoint(endpoint_to_lookup);
  }

  peer_database::iterator peer_database::begin() const
  {
    return my->begin();
  }

  peer_database::iterator peer_database::end() const
  {
    return my->end();
  }

  size_t peer_database::size() const
  {
    return my->size();
  }

} } // end namespace graphene::net
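// Usage sketch: a minimal example of how caller code might drive the public
// peer_database API defined above. This is illustrative only; `data_dir`, the
// "peers.json" file name, and the endpoint value are assumptions, not taken
// from this file.
//
//   graphene::net::peer_database peer_db;
//   peer_db.open(data_dir / "peers.json");   // load existing records, pruning to MAXIMUM_PEERDB_SIZE
//
//   graphene::net::potential_peer_record record(fc::ip::endpoint::from_string("203.0.113.10:1776"));
//   record.last_seen_time = fc::time_point::now();
//   peer_db.update_entry(record);            // insert, or overwrite the record with the same endpoint
//
//   for (const graphene::net::potential_peer_record& peer : peer_db)  // iteration follows the last_seen_time index
//     ilog("known peer: ${ep}", ("ep", peer.endpoint));
//
//   peer_db.close();                         // persist all records back to disk and clear memory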