The Open Network archival node dump store

Dumps are created every Sunday at about 18:00 UTC and can be found here.

System Requirements

An archival node is heavy; you will need the following hardware and software:
  1. As of 08.01.2024 the node database is ~7TB in size when uncompressed and ~2.8TB when stored on an lz4-compressed ZFS filesystem. Database growth depends on network activity; expect at least 100+GB (uncompressed) of growth per month, potentially much more.
  2. As of 08.01.2024 validator-engine opens over 400k files when operating on an archival node database, so make sure your OS allows it to do so (a Linux example follows this list). xBSD-based systems, while generally very stable with validator-engine, cannot work at all due to RocksDB's hard limit of 32767 open files on xBSD.
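
For example, on a Linux host running validator-engine under systemd, the open-file limit can be raised with a unit override. This is only a sketch; the unit name validator.service and the limit value are assumptions, adjust them to your installation:

# create a systemd override raising the open-file limit for the node (values are illustrative)
mkdir -p /etc/systemd/system/validator.service.d
printf '[Service]\nLimitNOFILE=1048576\n' > /etc/systemd/system/validator.service.d/nofile.conf
# reload units and restart the node for the new limit to take effect
systemctl daemon-reload
systemctl restart validator.service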

Restoring the dump

Dumps come in the form of ZFS snapshots compressed with plzip. You need to install ZFS on your host and restore the dump; see the Oracle documentation for more details. Before restoring, we highly recommend enabling compression on the parent ZFS filesystem; it will save you a lot of space.
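
For example, assuming the parent pool is named mypool (matching the restore command below), lz4 compression can be enabled with:

# enable lz4 compression on the parent filesystem so the received dataset inherits it
zfs set compression=lz4 mypool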

Tell plzip to use as many cores as your machine allows to speed up the extraction process (the -n parameter). The pipe viewer (pv) utility is also handy for monitoring progress. Here is an example command that restores the dump directly from this server via curl:

curl -L -s https://archival-dump.ton.org/dumps/latest.zfs.lz | pv | plzip -d -n8 | zfs recv mypool/ton-work
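
Once the stream has been received, you can check the dataset and, if desired, set its mountpoint to wherever your node expects the database; mypool/ton-work matches the command above, while the /var/ton-work mountpoint is only an assumption:

# verify the dataset exists and see how much space it occupies
zfs list mypool/ton-work
# optionally mount it at the path your node uses (path is an assumption)
zfs set mountpoint=/var/ton-work mypool/ton-work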

Configuring DB

We assume that you already have a running TON node into which you wish to inject the database. Once the restore completes, copy db/config.json and db/keyring, as well as the keys from your existing node, into the restored db path; a sketch of these steps follows the parameter line below. Also make sure that the db directory on the restored ZFS filesystem belongs to the validator user. Obviously, your validator-engine needs to be pointed at the new db directory as well. Make sure you start validator-engine with the following parameters:
--state-ttl 315360000 --archive-ttl 315360000 --block-ttl 315360000
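
Put together, the copy and ownership steps might look like the sketch below; the /var/ton-work source path, the /mnt/ton-work mountpoint, and the validator user and group are all assumptions, adjust them to your installation:

# copy node identity and configuration from the existing node into the restored db
cp /var/ton-work/db/config.json /mnt/ton-work/db/
cp -r /var/ton-work/db/keyring /mnt/ton-work/db/
cp -r /var/ton-work/keys /mnt/ton-work/
# the restored db directory must belong to the validator user
chown -R validator:validator /mnt/ton-work/db
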
Please be patient once you start the node, and observe the logs. The dump ships without DHT caches, so it will take your node some time to find other nodes and then sync with them. Depending on the age of the snapshot, your node might need anywhere from a few hours to several days to catch up with the network. This is normal.
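
One way to gauge sync progress is validator-engine-console's getstats command, which reports unixtime alongside masterchainblocktime; the node is in sync when the two are close. The key files, address, and port below are placeholders for your own console setup:

# query node statistics once (all arguments are placeholders)
validator-engine-console -k client -p server.pub -a 127.0.0.1:<console-port> -c getstats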

Node maintenance

The node database requires cleansing from time to time (we advise once a week). To do so, perform the following steps as root:
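
The steps themselves are not reproduced here; as a rough sketch (the validator.service unit name, the /var/ton-work/db path, and the temp-archive pattern are assumptions, verify against the official node documentation), a weekly routine could be:

# stop the node and confirm the process has actually exited
systemctl stop validator.service
# drop stale RocksDB log files and temporary archive packages (paths are assumptions)
find /var/ton-work/db -name 'LOG.old*' -exec rm {} +
rm -rf /var/ton-work/db/files/packages/temp.archive.*
systemctl start validator.service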

Troubleshooting and backups

If for some reason something does not work or breaks, you can always roll back to the @archstate snapshot on your ZFS filesystem; this is the original state from the dump. If your node works well, you can remove this snapshot to save storage space, but we do recommend snapshotting your filesystem regularly for rollback purposes, because the validator node has been known to corrupt data, as well as config.json, in some cases. zfsnap is a nice tool for automating snapshot rotation.
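
With the pool layout from the restore command above, the rollback and the later cleanup would be:

# roll the filesystem back to the pristine dump state
# (if newer snapshots exist, zfs requires -r to roll back past them)
zfs rollback mypool/ton-work@archstate
# or, once the node runs well, reclaim the space held by the snapshot
zfs destroy mypool/ton-work@archstate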