- Mar 18, 2025
- Nov 06, 2024
Shorten HAF trigger names for the PostgreSQL identifier limit
Adding the hive_data schema caused an unexpected issue for HAF applications. The names of triggers and their handling functions, automatically generated by HAF from the schema and application table names, became 5 characters longer (the schema name changed from 'hive' to 'hive_data'). This caused some identifiers to exceed the 64-character limit imposed by PostgreSQL. To maintain compatibility, the hive_data schema was renamed to hafd, which has the same length as 'hive'.
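The arithmetic behind the rename can be sketched as follows (the trigger-name pattern here is purely illustrative, not the exact format HAF generates):

```python
# PostgreSQL truncates identifiers longer than NAMEDATALEN - 1 = 63 bytes.
PG_MAX_IDENTIFIER = 63

def trigger_name(schema: str, table: str) -> str:
    """Hypothetical HAF-style generated trigger name: the schema and
    application table names are embedded directly in the identifier."""
    return f"hive_insert_trigger_{schema}_{table}"

long_table = "a" * 40  # an application table with a long name

# 'hive_data' is 5 characters longer than 'hive', so every generated name grew by 5:
assert len(trigger_name("hive_data", long_table)) - len(trigger_name("hive", long_table)) == 5

# 'hafd' has the same length as 'hive', so the rename restores the old name lengths:
assert len("hafd") == len("hive")
assert len(trigger_name("hafd", long_table)) == len(trigger_name("hive", long_table))
```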
During an update, the schema hive is removed together with all its objects, then recreated with the new hfm version. This solves the annoying problem of overriding a function with similar arguments, which caused ambiguity at runtime. A schema hive_update, holding objects created only for update purposes, was added.
- Aug 07, 2024
Konrad Botor authored
Konrad Botor authored
- Dec 06, 2023
Marcin authored
Marcin authored
It is now possible to fill HAF tables starting from a given block rather than from the first block. This saves storage for applications that do not need historical data from before their start in the blockchain. A new sql_serializer parameter, psql-first-block, was introduced to choose the first block that needs to be synced.
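The effect of the parameter can be sketched as a simple filter on incoming block numbers (the function name is illustrative; only the psql-first-block parameter comes from the message above):

```python
def should_dump(block_num: int, psql_first_block: int = 1) -> bool:
    """Dump a block into HAF tables only from psql_first_block onward.
    With the default of 1, every block is dumped, preserving old behavior."""
    return block_num >= psql_first_block

# An application started at block 5,000,000 that needs no earlier history:
assert not should_dump(4_999_999, psql_first_block=5_000_000)
assert should_dump(5_000_000, psql_first_block=5_000_000)
assert should_dump(1)  # default: sync from the first block
```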
Marcin authored
Currently, FKs are set with the 'NOT VALID' option. Consequently, dropping and restoring them does not take long, so the main reason for adding 'psql-livesync-threshold' no longer exists. The only reason the option is not removed altogether is that it allows HAF to be started easily in the LIVE state, which is crucial for tests.
- Mar 24, 2023
Add `op_body_filter` tool for checking regexes
Option processing in a filter is wrapped in a `try-catch`.
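The idea of guarding filter-option processing can be sketched in Python terms: compile each configured pattern eagerly and collect errors instead of failing later during filtering (Python's `re` module stands in here for the plugin's actual regex engine):

```python
import re

def compile_filters(patterns):
    """Compile each filter pattern up front; collect errors instead of crashing."""
    compiled, errors = [], []
    for pat in patterns:
        try:
            compiled.append(re.compile(pat))
        except re.error as exc:
            errors.append((pat, str(exc)))
    return compiled, errors

good, bad = compile_filters([r"transfer.*", r"(unclosed"])
assert len(good) == 1 and len(bad) == 1
assert bad[0][0] == "(unclosed"  # the broken pattern is reported, not fatal
```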
- Feb 17, 2023
Marcin authored
It turned out that in practice the psql-livesync-threshold default value was too small, and the time spent recreating FKs reduced the benefit of dumping block data with many threads.
- Oct 15, 2022
Bartek Wrona authored
- Jun 06, 2022
Marcin Sobczyk authored
- Jan 20, 2022
Marcin authored
- Jan 19, 2022
Marcin authored
It turned out that indexes slow down P2P sync too much.
Dan Notestein authored
- Jan 17, 2022
Marcin authored
'START' is the initial state of synchronization. In this state a decision is made about which state synchronization must enter next. While in this state, no blocks are synchronized; the synchronizer only waits for an event that clarifies the most optimal next state. The 'START' state solves the problem of breaking LIVE synchronization with CTRL+C: previously, synchronization always began in the P2P state, which forced foreign keys to be disabled, and consequently the FKs had to be restored in the next state, which takes a lot of time. Now, when leaving the 'START' state, a decision is made whether it is better to go to LIVE or to P2P/REINDEX. The decision is based on the expected number of blocks to sync to reach the network HEAD BLOCK. A new sql_serializer parameter, 'psql-livesync-threshold', was added: if the expected number of blocks is less than or equal to the threshold, synchronization moves to live sync; otherwise it moves to P2P/REINDEX.
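The decision made when leaving the 'START' state can be sketched as follows (state names follow the message above; the function itself is illustrative, not the actual plugin code):

```python
def next_sync_state(blocks_behind_head: int, psql_livesync_threshold: int) -> str:
    """Choose the next synchronization state based on how far the node is
    behind the network HEAD BLOCK."""
    if blocks_behind_head <= psql_livesync_threshold:
        return "LIVE"        # close enough: keep indexes and FKs enabled
    return "P2P/REINDEX"     # far behind: bulk sync with FKs disabled

assert next_sync_state(blocks_behind_head=10, psql_livesync_threshold=100) == "LIVE"
assert next_sync_state(blocks_behind_head=100, psql_livesync_threshold=100) == "LIVE"
assert next_sync_state(blocks_behind_head=101, psql_livesync_threshold=100) == "P2P/REINDEX"
```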
Marcin authored
A new class for manipulating indexes and constraints was added: indexes_controller.
- indexation_state enables/disables indexes and foreign keys depending on its state
- when the node is interrupted with CTRL+C, nothing happens to the indexes
- P2P sync works with indexes enabled and FKs disabled
- REINDEX sync works with indexes disabled and FKs disabled
- LIVE sync works with indexes enabled and FKs enabled
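The per-state index/FK combinations listed above can be captured in a small table (a sketch of the rules only, not the actual indexes_controller API):

```python
# (indexes, foreign_keys) enabled/disabled per synchronization state,
# mirroring the rules in the commit message above.
INDEXATION_STATE = {
    "P2P":     {"indexes": True,  "foreign_keys": False},
    "REINDEX": {"indexes": False, "foreign_keys": False},
    "LIVE":    {"indexes": True,  "foreign_keys": True},
}

assert INDEXATION_STATE["P2P"] == {"indexes": True, "foreign_keys": False}
assert INDEXATION_STATE["REINDEX"] == {"indexes": False, "foreign_keys": False}
assert INDEXATION_STATE["LIVE"] == {"indexes": True, "foreign_keys": True}
```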
Dan Notestein authored
- Jan 16, 2022
Dan Notestein authored
- Jan 11, 2022
Marcin authored
- one common installation instruction for the whole HAF
- sql_serializer parameter description
- cmake target types
- sql_serializer: dumping blocks to the database
- Jan 04, 2022
Marcin authored
The table hive.irreversible_data was extended with a column dirty (boolean). When dirty is TRUE, the irreversible data are in an inconsistent state. When HAF crashes during sync (replay or p2p sync), the data are inconsistent: some block-dumping threads finished their jobs and some did not. When the node starts after a crash, it checks the dirty flag; if the flag is set, the inconsistent data have to be removed, but this may take a very long time, because during replay there are no indexes to make inserting new rows faster. When restarting HAF after a crash, an error is returned with information about the data inconsistency. To successfully restart the node, the switch `--psql-force-open-inconsistent` must be used.
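The restart check described above can be sketched as follows (the function and return values are illustrative; only the dirty flag and the `--psql-force-open-inconsistent` switch come from the message):

```python
def check_consistency_on_start(dirty: bool, force_open_inconsistent: bool) -> str:
    """Decide what to do with possibly inconsistent irreversible data on startup."""
    if not dirty:
        return "start"  # data are consistent, start normally
    if not force_open_inconsistent:
        # mirror the error described in the commit message
        raise RuntimeError(
            "irreversible data are inconsistent; "
            "restart with --psql-force-open-inconsistent to clean them up"
        )
    return "remove_inconsistent_data_then_start"

assert check_consistency_on_start(dirty=False, force_open_inconsistent=False) == "start"
assert check_consistency_on_start(dirty=True, force_open_inconsistent=True) == "remove_inconsistent_data_then_start"
```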
- Nov 22, 2021
Marcin authored
New sql_serializer option psql-enable-accounts-dump; the default is true, which means accounts and account_operation are dumped.
- Oct 22, 2021
Marcin authored
- Oct 21, 2021
- Oct 19, 2021
Mariusz Trela authored
Mariusz Trela authored
- Sep 03, 2021
- Jun 22, 2021
Marcin authored
- May 31, 2021
- May 25, 2021
Marcin authored
- Apr 29, 2021
Marcin authored
- Apr 15, 2021
Marcin authored
- Apr 12, 2021
- Mar 30, 2021