diff --git a/README.md b/README.md
index 34fc482966285e736492fae5fff0bfe00c3e9606..3cb89c4393bdc445ca89eca20f27a54c68824894 100644
--- a/README.md
+++ b/README.md
@@ -20,8 +20,8 @@ The image above shows the main components of a HAF installation:
   sql_serializer is the hived plugin which is responsible for pushing the data from blockchain blocks into the HAF database. The plugin also informs the database about the occurrence of microforks (in which case HAF has to revert database changes that resulted from the forked out blocks). It also signals the database when a block has become irreversible (no longer revertable via a fork), so that the info from that block can be moved from the "reversible" tables inside the database to the "irreversible" tables.
   Detailed documentation for the sql_serializer is here: [src/sql_serializer/README.md](./src/sql_serializer/README.md)
 * **PostgreSQL database**
-  A HAF database contains data from blockchain blocks in the form of SQL tables (these tables are stored in the "hive" schema inside the database), and it also contains tables for the data generated by HAF apps running on the HAF server (each app has its own separate schema to encapsulate its data). The system utilizes Postgres authentication and authorization mechanisms to protect HAF-based apps from interfering with each other.
-* **HIVE FORK MANAGER** is a PostgreSQL extension that implements HAF's API inside the "hive" schema. This extension must be included when creating a new HAF database. This extension defines the format of block data saved in the database. It also defines a set of SQL stored procedures that are used by HAF apps to get data about the blocks. The SQL_SERIALIZER dumps blocks to the tables defined by the hive_fork_manager. This extension defines the process by which HAF apps consume blocks, and ensures that apps cannot corrupt each other's data. The hive_fork_manager is also responsible for rewinding the state of the tables of all the HAF apps running on the server in the case of a micro-fork occurrence. Detailed documentation for hive_fork_manager is here: [src/hive_fork_manager/Readme.md](./src/hive_fork_manager/Readme.md)
+  A HAF database contains data from blockchain blocks in the form of SQL tables (these tables are stored in the "hafd" schema inside the database), and it also contains tables for the data generated by HAF apps running on the HAF server (each app has its own separate schema to encapsulate its data). The system utilizes Postgres authentication and authorization mechanisms to protect HAF-based apps from interfering with each other.
+* **HIVE FORK MANAGER** is a PostgreSQL extension that implements HAF's API inside the "hive" schema. This extension must be included when creating a new HAF database. It defines the format of block data saved in the database, as well as a set of SQL stored procedures that HAF apps use to get data about the blocks. The SQL_SERIALIZER dumps blocks to the tables defined by the hive_fork_manager in the "hafd" schema. The extension also defines the process by which HAF apps consume blocks, and ensures that apps cannot corrupt each other's data. The hive_fork_manager is also responsible for rewinding the state of the tables of all the HAF apps running on the server in the case of a micro-fork occurrence. Detailed documentation for hive_fork_manager is here: [src/hive_fork_manager/Readme.md](./src/hive_fork_manager/Readme.md)
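+
+For illustration, a minimal sketch of this separation (the schema and context names are hypothetical, and `hive.app_create_context` may require additional arguments depending on the HAF version; see the hive_fork_manager documentation linked above):
+
+```sql
+-- Each HAF app keeps its data in its own schema, separate from the "hafd"
+-- schema (block data) and the "hive" schema (HAF's API).
+CREATE SCHEMA my_app;
+
+-- Register the app with the hive_fork_manager so it can consume blocks.
+-- Additional arguments may be required by your HAF version.
+SELECT hive.app_create_context( 'my_app' );
+```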
 
 # HAF server quickstart
 
diff --git a/src/hive_fork_manager/Readme.md b/src/hive_fork_manager/Readme.md
index 602a6a26ab71c94c43a9c21c611cc8ff8fbb70ce..19843e4d02658c3b02ce104ff6e123b28aeead5b 100644
--- a/src/hive_fork_manager/Readme.md
+++ b/src/hive_fork_manager/Readme.md
@@ -109,7 +109,7 @@ all blocks in a batch are fully processed AND the current_block_number has been
 #### Using a group of contexts
 In certain situations, it becomes necessary to ensure that multiple contexts are synchronized
 and point to the same block. This synchronization of contexts allows for consistent behavior
-across different applications. To achieve this, there are specific functions available, such as 'hive.app_next_block',
+across different applications. To achieve this, there are specific functions/procedures available, such as 'hive.app_next_iteration',
 that operate on an array of contexts and move them synchronously.
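+
+For illustration, a minimal sketch of advancing a group of contexts together (the context names are hypothetical, and the exact signature of `hive.app_next_iteration` may differ between HAF versions; it is shown here as a procedure that takes an array of context names and returns the next block range to process):
+
+```sql
+DO $$
+DECLARE
+    -- the composite type and field names are assumed here; check the procedure definition
+    __blocks hive.blocks_range;
+BEGIN
+    -- both contexts are advanced together, so they always point to the same block
+    CALL hive.app_next_iteration( ARRAY[ 'ctx_balances', 'ctx_history' ], __blocks );
+
+    IF __blocks IS NOT NULL THEN
+        RAISE NOTICE 'processing blocks % - %', __blocks.first_block, __blocks.last_block;
+        -- application-specific processing of the returned block range goes here
+    END IF;
+END
+$$;
+```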
 
 When using synchronized contexts, it is of utmost importance to ensure that all the contexts within a group
@@ -235,12 +235,12 @@ For example, some apps perform irreversible external operations such as a transf
 
 Other apps require very high performance, and don't want to incur the extra performance overhead associated with maintaining the data required to rollback blocks in the case of a fork. In such case, it may make sense to trade off the responsiveness of presenting the most recent blockchain data in order to create an app that can respond to api queries faster and support more users.
 
-HAF distinguish which appl will only traverse irreversible block data. This means that calls to `hive.app_next_block` will return only the range of irreversible blocks which are not already processed or NULL (blocks that are not yet marked as irreversible will be excluded). Similarly, the set of views for an irreversible context only deliver a snapshot of irreversible data up to the block already processed by the app.
+HAF distinguishes apps that will only traverse irreversible block data. This means that calls to `hive.app_next_iteration` will return only ranges of irreversible blocks that have not yet been processed, or NULL (blocks not yet marked as irreversible are excluded). Similarly, the set of views for an irreversible context delivers only a snapshot of irreversible data up to the block already processed by the app.
 The user needs to decide whether an application is non-forking; this is done when creating a context with 'hive.app_create_context' and passing the argument
 '_is_forking' = FALSE.
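+
+A minimal sketch (the context name is hypothetical, and `hive.app_create_context` may require additional arguments depending on the HAF version):
+
+```sql
+-- create a non-forking context: the app will only ever traverse irreversible blocks
+SELECT hive.app_create_context( _name => 'my_irreversible_app', _is_forking => FALSE );
+```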
 
 It is possible to change an already created context from non-forking to forking and vice versa with methods
-`app_context_set_non_forking(context_name)` and `hive.app_context_set_forking(context_name)`
+`hive.app_context_set_non_forking(context_name)` and `hive.app_context_set_forking(context_name)`
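+
+For example (the context name is hypothetical):
+
+```sql
+-- stop tracking reversible blocks; the context's reversible data is deleted (see warning below)
+SELECT hive.app_context_set_non_forking( 'my_app_ctx' );
+
+-- switch back: from now on the context also traverses reversible blocks
+SELECT hive.app_context_set_forking( 'my_app_ctx' );
+```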
 
 :warning: **Switching an app from forking to non-forking will delete all of its reversible data**