The Open Network archival node dump store

The dumps can be downloaded from https://archival-dump.ton.org/dumps/.

System Requirements

An archival node is resource-heavy; you will need the following:

Contents of dumps

Dumps contain the relevant archival data from the db path of a TON node.

Dump file format and structure

Dumps are provided as ZFS snapshot files compressed with zstd. Each dump is a full ZFS dump of the TON archival node database, and the masterchain tip seqno contained in the dump is included in the filename, for example: mainnet_full_42847499.zfs.zstd. At the moment, creation of full dumps is started every Sunday at 02:30 UTC, and creating a dump takes about 12 hours.
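Since the tip seqno is encoded in the filename, the newest dump can be located by comparing seqnos. A minimal shell sketch, assuming the directory at https://archival-dump.ton.org/dumps/ serves a plain file index that includes the dump names (the listing format is an assumption, adjust the parsing if it differs):

# Find the full mainnet dump with the highest masterchain tip seqno.
latest=$(curl -s https://archival-dump.ton.org/dumps/ \
  | grep -o 'mainnet_full_[0-9]*\.zfs\.zstd' \
  | sort -t_ -k3 -n \
  | tail -n1)
echo "Latest dump: ${latest}"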

Important notes

Restoring dump

Example command:
curl -L -s https://archival-dump.ton.org/dumps/mainnet_full_41847499.zfs.zstd | pv | zstd -d -T16 | zfs recv mypool/ton-db
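The pipeline relies on curl, pv, and zstd being installed, and the target pool must exist before zfs recv can receive the stream. A minimal preparation sketch, assuming a dedicated disk /dev/sdb and the pool name mypool from the example above (the device name and the compression setting are assumptions, adjust to your hardware):

# Create the pool that will receive the dump; /dev/sdb is an assumed device.
zpool create mypool /dev/sdb
# Optionally enable on-the-fly compression on the pool (an assumption, not a requirement of the dump).
zfs set compression=lz4 mypool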

Remove unnecessary ZFS snapshots

Each dump restoration adds a snapshot to your target filesystem. We advise removing these snapshots once you have launched your node at the latest; otherwise your disk space usage will skyrocket.
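A minimal sketch for finding and removing the leftover snapshots, assuming the dataset name mypool/ton-db from the restore example (the snapshot name in the destroy command is illustrative, use the name shown by zfs list):

# List snapshots that exist on the restored dataset.
zfs list -r -t snapshot mypool/ton-db
# Destroy a snapshot you no longer need (name shown is illustrative).
zfs destroy mypool/ton-db@mainnet_full_41847499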

Configuring DB

We assume that you already have a running TON node into which you wish to inject the database. Once the restore is complete, copy config.json and the keyring from your existing node db into the restored db path. Also make sure that the data on the restored ZFS filesystem belongs to the user you run your node under. Your validator-engine also needs to be pointed to the new db directory. Make sure you start validator-engine with the following parameters:
--state-ttl 3153600000 --archive-ttl 3153600000
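A minimal sketch of these post-restore steps, assuming the dump was received into mypool/ton-db mounted at /mypool/ton-db, the existing node database lives at /var/ton-work/db, and the node runs as the user validator; every path and user name here is an assumption, adjust them to your setup:

# Copy the node identity and configuration into the restored database (paths are assumed).
cp /var/ton-work/db/config.json /mypool/ton-db/
cp -r /var/ton-work/db/keyring /mypool/ton-db/
# Ensure the restored data belongs to the user the node runs under (assumed: validator).
chown -R validator:validator /mypool/ton-db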
Please be patient once you start the node, and observe the logs. The dump comes without DHT caches, so it will take your node some time to find other nodes and then sync with them. Depending on the age of the snapshot, your node might need from a few hours to several days to catch up with the network. This is normal.
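One way to track how far behind the node is: compare the masterchain block time with the current time via validator-engine-console. A sketch, assuming console access is already configured for your node; the key paths and port below are placeholders:

# Query node statistics; the key file locations and the port are assumptions from a typical setup.
validator-engine-console -k /var/ton-work/keys/client -p /var/ton-work/keys/server.pub -a 127.0.0.1:43678 -c "getstats"
# The node has caught up once masterchainblocktime in the output is close to unixtime.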

Node maintenance

The node database requires cleaning from time to time (we advise once a week). To do so, perform the following steps as root:
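The specific steps are not reproduced in this section. As a hedged sketch only of what a weekly cleanup commonly involves, assuming a systemd service named validator and a database at /var/ton-work/db (both the service name and the paths are assumptions, adjust them to your installation):

# Stop the node first; never clean the database while the node is running.
systemctl stop validator
# Remove stale RocksDB log files (path is an assumption).
find /var/ton-work/db -name 'LOG.old*' -exec rm -f {} +
# Remove leftover temporary archive packages (path is an assumption).
rm -f /var/ton-work/db/files/packages/temp.archive.*
# Start the node again.
systemctl start validator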