- Mar 18, 2025
- Mar 17, 2025
- Mar 14, 2025
- Mar 11, 2025
  - Dan Notestein authored
    Handle cases where the block currently being processed is only in the blocks_reversible table (avoid returning 0 for the block).
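    As an illustration, here is a minimal sketch of such a fallback lookup in Python, assuming a HAF database reachable as haf_block_log, the psycopg2 driver, and tables hafd.blocks (irreversible blocks) and hafd.blocks_reversible (fork data) with num and fork_id columns; these names are assumptions, not confirmed by the commit.

    ```python
    # Illustrative only -- not HAF's actual implementation.
    # Assumed layout: hafd.blocks holds irreversible blocks, hafd.blocks_reversible
    # holds not-yet-irreversible blocks keyed by (fork_id, num).
    import psycopg2

    def find_block(conn, block_num):
        """Return the block number if known, falling back to reversible data."""
        with conn.cursor() as cur:
            cur.execute("SELECT num FROM hafd.blocks WHERE num = %s", (block_num,))
            row = cur.fetchone()
            if row is None:
                # The block may exist only in the reversible table so far;
                # without this fallback the lookup would report nothing (or 0).
                cur.execute(
                    "SELECT num FROM hafd.blocks_reversible"
                    " WHERE num = %s ORDER BY fork_id DESC LIMIT 1",
                    (block_num,),
                )
                row = cur.fetchone()
        return row[0] if row else None

    if __name__ == "__main__":
        with psycopg2.connect(dbname="haf_block_log") as conn:
            print(find_block(conn, 95_000_000))
    ```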
- Mar 04, 2025
  - Dan Notestein authored
- Mar 03, 2025
  - Dan Notestein authored
    - hive: develop (4cd3fc3d23f4074be91a11f00a7a6035405faaf8)
- Feb 21, 2025
  - Konrad Botor authored
  - Marek Kochanowicz authored
- Feb 10, 2025
  - Konrad Botor authored
- Feb 06, 2025
  - Dan Notestein authored
- Jan 30, 2025
  - Dan Notestein authored
    Autovacuum settings that can only be set in postgres.conf, reduce the default work_mem to 64MB, plus small tweaks.
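    A small hedged sketch of how one might verify that a running PostgreSQL instance picked up settings like these, assuming a local database named haf_block_log and psycopg2; the parameter names checked are illustrative, not the commit's exact list.

    ```python
    # Illustrative check of a few memory/autovacuum parameters via pg_settings.
    import psycopg2

    PARAMS = ["work_mem", "autovacuum_max_workers", "autovacuum_naptime"]

    with psycopg2.connect(dbname="haf_block_log") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT name, setting, unit, context"
                " FROM pg_settings WHERE name = ANY(%s) ORDER BY name",
                (PARAMS,),
            )
            for name, setting, unit, context in cur.fetchall():
                # 'context' tells where a parameter can be set; values like
                # 'sighup' or 'postmaster' cannot be changed per session and
                # belong in the server configuration file.
                print(f"{name} = {setting}{unit or ''} (context: {context})")
    ```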
- Jan 28, 2025
  - Konrad Botor authored
- Jan 24, 2025
  - Konrad Botor authored
- Jan 17, 2025
  - Marcin authored
- Jan 15, 2025
  - Dan Notestein authored
    - hive: develop (74eb54442330ace71c37a43b464aee6b1bd4dae2)
- Jan 14, 2025
  - Marcin authored
    Previously only the state providers' shadow tables were excluded, which caused problems when a HAF instance with an installed context with registered tables was updated.
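    A hedged sketch of the idea in Python: build the exclusion set from both the shadow tables and the context-registered tables. The registry name hafd.registered_tables, its columns, and the shadow_ naming pattern are assumptions for illustration only, not taken from the commit.

    ```python
    # Hypothetical sketch -- table and column names are assumptions.
    # Previously only state-provider shadow tables were skipped; the fix also
    # skips tables registered by application contexts.
    import psycopg2

    def tables_to_exclude(conn):
        excluded = set()
        with conn.cursor() as cur:
            # State-provider shadow tables (the only exclusion before the fix).
            cur.execute(
                "SELECT table_schema, table_name FROM information_schema.tables"
                " WHERE table_schema = 'hafd' AND table_name LIKE 'shadow_%'"
            )
            excluded.update(cur.fetchall())
            # Tables registered by application contexts (the missing part).
            cur.execute(
                "SELECT origin_table_schema, origin_table_name"
                " FROM hafd.registered_tables"  # assumed registry table
            )
            excluded.update(cur.fetchall())
        return excluded

    if __name__ == "__main__":
        with psycopg2.connect(dbname="haf_block_log") as conn:
            for schema, table in sorted(tables_to_exclude(conn)):
                print(f"{schema}.{table}")
    ```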
- Jan 09, 2025
  - Marcin authored
  - Marcin authored
  - Marcin authored
  - Marcin authored
  - Marcin authored
    Currently state providers create tables in the hafd schema. All tables in the hafd schema are included in the database hash computation, but hashes for state providers are computed differently and should not affect the hafd schema hash.
  - Marcin authored
  - Marcin authored
    After many changes, the hash computed on the database can only be used to check whether hfm can be updated to a given new version. The hash cannot be used to check whether the database schema for a given hfm version was modified, because it does not take all HAF elements into the computation. This means there is no need to store the database hash, since it has no use; moreover, it is misleading and could mask that some parts of the schema were modified. Warning: this change modifies the hafd schema, which means that old hfm versions cannot be updated to it.
  - Marcin authored
  - Marcin authored
  - Marcin authored
  - Marcin authored
  - Marcin authored
    The newer hash computation method is injected into a database created with an older hfm version, so both a freshly created HAF database and an old one use the same algorithm for getting the hash.
  - Marcin authored
    Now there is no need to extend the list of hashed tables each time a new table is added to the hafd schema. WARNING: previously people forgot to extend the list, and there are a few tables which were not hashed. Because of this, updating from the previous version of HAF is impossible.
  - Marcin authored
    There is no sense in passing a schema parameter to the database hash computation; in the end there is a list of tables to include in the computation, and all of them belong to the hafd schema.
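    To illustrate the idea in the last few entries (derive the hashed table list from the hafd schema itself instead of maintaining it by hand, so no schema parameter is needed), here is a toy sketch; it is not HAF's actual hashing algorithm, and the database name and the choice to hash only column definitions are assumptions.

    ```python
    # Toy schema hash: enumerate every table in the hafd schema and hash the
    # ordered column definitions. Not HAF's real algorithm -- it only shows the
    # shape of "derive the table list from the schema instead of listing it".
    import hashlib
    import psycopg2

    def hafd_schema_hash(conn):
        digest = hashlib.sha256()
        with conn.cursor() as cur:
            cur.execute(
                "SELECT table_name, column_name, data_type"
                " FROM information_schema.columns"
                " WHERE table_schema = 'hafd'"
                " ORDER BY table_name, ordinal_position"
            )
            for table_name, column_name, data_type in cur.fetchall():
                digest.update(f"{table_name}.{column_name}:{data_type}\n".encode())
        return digest.hexdigest()

    if __name__ == "__main__":
        with psycopg2.connect(dbname="haf_block_log") as conn:  # assumed DB name
            print(hafd_schema_hash(conn))
    ```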
  - Konrad Botor authored
- Jan 03, 2025
  - Dan Notestein authored
    - hive: develop (a181ebfab21471951a369b138adb2f5003d7a642)
- Jan 02, 2025
- Dec 23, 2024
  - Dan Notestein authored
    - hive: develop (015c4c0dc6b76d8b256363335609c5eadbcfaaeb)
- Dec 22, 2024
  - Dan Notestein authored
    This reverts commit e9fa365a.