Attention
The restore from dumps method for archival nodes has been superseded by the restore from archive packages method, which is integrated into the mytonctrl installer. If you want to create a new archival node, please run the mytonctrl installer without parameters to launch the interactive setup process.
However, restoring from a fresh dump is still by far the fastest method of creating an archival node, so we do provide dumps here from time to time when required.
The dumps are available at https://archival-dump.ton.org/dumps/.
System Requirements
An archival node is resource-heavy; you will need the following:
- 16 CPU cores dedicated to the validator engine
- At least 128 GB of memory
- At least 16 TB of SSD storage (20 TB if you do not use ZFS with compression)
- A symmetric gigabit internet link
- A Linux OS with the open files limit set above 700k (see the example below)
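For example, if you run your node as a systemd service, the limit can be raised in the unit file. The service name validator.service below is an assumption; use whatever unit your installation created.
# Add under the [Service] section of the unit file:
#   LimitNOFILE=1048576
sudo systemctl edit validator.service
sudo systemctl daemon-reload
sudo systemctl restart validator.service
# Verify the effective limit of the running process:
cat /proc/$(pidof validator-engine)/limits | grep "open files"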
Contents of dumps
Dumps contain the relevant archival data from the db path of a TON node.
Dump file format and structure
Dumps are provided as ZFS snapshot files compressed with zstd.
Each dump contains a full ZFS dump of the TON archival node database; the masterchain tip seqno of the dump is included in the filename, for example mainnet_full_42847499.zfs.zstd.
Important notes
- Before you proceed with restoring the archival dump, make sure you have sufficient disk space available. To check the current requirement, see the contents of the fs_size file for the dump you wish to restore; the value is in bytes. As of 18.10.2025 you need 16 TB of storage to restore the dump.
- You need to install ZFS on your host before restoring the dump; see the Oracle documentation for more details.
- Before restoring, we highly recommend enabling compression on the parent ZFS filesystem; this will save you a lot of space.
- You will also need the zstd and pv utilities. A preparation sketch follows this list.
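A minimal preparation sketch for a Debian/Ubuntu host follows. The package names, the pool name mypool, and the exact location of the fs_size file are assumptions; adjust them to your environment and to the dump listing page.
# Install ZFS, zstd and pv
sudo apt install -y zfsutils-linux zstd pv
# Check the disk space required by the dump (value is in bytes);
# the fs_size URL is assumed to sit next to the dump files
curl -s https://archival-dump.ton.org/dumps/fs_size
# Enable compression on the parent ZFS filesystem before receiving the dump
sudo zfs set compression=lz4 mypool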
Restoring dump
Example command:
curl -L -s https://archival-dump.ton.org/dumps/mainnet_full_41847499.zfs.zstd | pv | zstd -d -T16 | zfs recv mypool/ton-db
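Once the receive completes, you can verify the dataset and mount it at your node's database path. The path /var/ton-work/db below is an assumption; use the db path your node is actually configured with.
# Check the received dataset and its space usage
zfs list mypool/ton-db
# Mount the restored database at the node's db path (assumed path)
sudo zfs set mountpoint=/var/ton-work/db mypool/ton-db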
Removing unnecessary ZFS snapshots
Each dump restoration adds a snapshot to your target filesystem. We advise you to remove these snapshots at the latest once you have launched your node; otherwise your disk space usage will skyrocket.
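For example, with the dataset name used in the restore command above (the snapshot name shown is illustrative):
# List snapshots on the target dataset
zfs list -t snapshot mypool/ton-db
# Destroy a snapshot you no longer need
sudo zfs destroy mypool/ton-db@mainnet_full_41847499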
Configuring DB
We assume that you already have a running TON node into which you wish to inject the database. Once the restore is complete, copy config.json and the keyring from your existing node's db into the restored db path. Also make sure that the data on the restored ZFS filesystem belongs to the user you run your node under. Finally, your validator-engine needs to be pointed at the new db directory as well.
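A sketch of these steps, assuming your existing db lives at /var/ton-work/db.old, the restored filesystem is mounted at /var/ton-work/db, and the node runs as the validator user (all three are assumptions):
# Copy the node identity and configuration from the old db into the restored one
sudo cp /var/ton-work/db.old/config.json /var/ton-work/db/
sudo cp -r /var/ton-work/db.old/keyring /var/ton-work/db/
# Make sure the restored data belongs to the user the node runs under
sudo chown -R validator:validator /var/ton-work/db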
Make sure you start validator-engine with the following parameters: --state-ttl 3153600000 --archive-ttl 3153600000
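A sketch of a full invocation with these parameters; the binary location, the --db path, and the global config path are assumptions, and additional flags (such as --ip and logging options) are omitted. If your node is managed by a service manager, add the two TTL flags wherever validator-engine is launched instead.
/usr/bin/ton/validator-engine/validator-engine \
    -C /var/ton-work/db/global.config.json \
    --db /var/ton-work/db \
    --state-ttl 3153600000 \
    --archive-ttl 3153600000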
Please be patient once you start the node, and observe the logs. The dump comes without DHT caches, so it will take your node some time to find other nodes and then sync with them. Depending on the age of the snapshot, your node might need anywhere from a few hours to several days to catch up with the network. This is normal.
Node maintenance
The node database requires cleaning from time to time (we advise once a week). To do so, perform the following steps as root: