Dumps can be downloaded from https://archival-dump.ton.org/dumps/.
System Requirements
An archival node is resource-heavy; you will need the following:
- 16 CPU cores dedicated to the validator engine
- At least 128 GB of memory
- At least 12 TB of SSD storage; double this if you will not use ZFS with compression
- A symmetric gigabit internet link
- Linux OS with the open files limit above 400k
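The open files limit can be checked, and raised, along these lines (the 400k figure comes from the requirement above; the systemd drop-in is an assumption about how your node is managed):

```shell
# Show the current soft limit on open files for this shell session.
ulimit -Sn

# Raise it for the current session (requires a sufficiently high hard limit):
#   ulimit -n 409600

# For a systemd-managed node, a drop-in override is a common approach (assumption):
#   [Service]
#   LimitNOFILE=409600
```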
Contents of dumps
Dumps contain the relevant archival data from the db path of a TON node.
Dump files format and structure
Dumps are provided as ZFS snapshot files compressed with zstd.
Each dump contains a full ZFS dump of the TON archival node database; the masterchain tip seqno within the dump is included in the filename, for example mainnet_full_42847499.zfs.zstd. At the moment, creation of full dumps is started every Sunday at 02:30 UTC, and creating a dump takes about 12 hours.
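For instance, the tip seqno can be recovered from a dump filename with plain shell parameter expansion (filename taken from the example above):

```shell
# Strip the fixed prefix and suffix to get the masterchain tip seqno.
f=mainnet_full_42847499.zfs.zstd
seqno=${f#mainnet_full_}
seqno=${seqno%.zfs.zstd}
echo "$seqno"  # → 42847499
```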
Important notes
- Before you proceed with restoring the archival dump, make sure you have sufficient disk storage available. To check the current disk requirement, see the contents of the fs_size file for the dump you wish to restore; the value is in bytes. As of 19.12.2024 you need 10.9 TB of storage to restore the dump.
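A space check could be sketched like this; the `need` value is a placeholder (roughly the 10.9 TB figure above), and in practice you would copy it from the dump's fs_size file:

```shell
# Sketch: compare the fs_size value (bytes) against free space on the target path.
need=11990000000000   # placeholder, ~10.9 TiB; read the real value from fs_size
avail=$(df -P -B1 . | awk 'NR==2{print $4}')
if [ "$avail" -ge "$need" ]; then
  echo "enough space: $avail bytes available"
else
  echo "insufficient space: need $need bytes, have $avail"
fi
```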
- You need to install ZFS on your host to restore the dump; see the Oracle documentation for more details.
- Before restoring, we highly recommend enabling compression on the parent ZFS filesystem; this will save you a lot of space.
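Enabling compression might look like this sketch (the pool name matches the restore example in this document; zstd compression requires OpenZFS 2.0+, and lz4 is a reasonable fallback on older versions):

```shell
# Enable compression on the parent filesystem; the received dataset inherits it.
zfs set compression=zstd mypool

# Verify the setting took effect.
zfs get compression mypool
```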
- You will also need the zstd and pv utilities.
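A quick pre-flight check for the required tools could look like this (the apt package names are an assumption about a Debian/Ubuntu host):

```shell
# Report any missing tools before starting the restore.
for cmd in curl pv zstd zfs; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done

# On Debian/Ubuntu they can typically be installed with (assumption):
#   apt install zfsutils-linux zstd pv curl
```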
Restoring dump
Example command: curl -L -s https://archival-dump.ton.org/dumps/mainnet_full_41847499.zfs.zstd | pv | zstd -d -T16 | zfs recv mypool/ton-db
Remove unnecessary ZFS snapshots
Each dump restoration will add a snapshot to your target FS. We advise removing these snapshots at the latest once you have launched your node; otherwise your disk space usage will skyrocket.
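Listing and destroying the leftover snapshot might look like this (dataset name from the restore example above; the snapshot name shown here is illustrative, use whatever `zfs list` reports):

```shell
# List snapshots on the restored dataset.
zfs list -t snapshot mypool/ton-db

# Destroy a snapshot once the node is up and running (name is illustrative).
zfs destroy mypool/ton-db@mainnet_full_41847499
```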
Configuring DB
We assume that you already have a running TON node into which you wish to inject the database. Once the restore completes, copy config.json and the keyring from your existing node db into the restored db path. Also make sure that the data on the restored ZFS filesystem belongs to the user you run your node under. Finally, your validator-engine needs to be pointed to the new db directory as well.
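As a sketch, assuming your existing node db lives at /var/ton-work/db, the restored dataset is mounted at /mypool/ton-db, and the node runs as user `ton` (all three paths/names are assumptions):

```shell
# Copy the node's config and keys from the old db into the restored one.
cp /var/ton-work/db/config.json /mypool/ton-db/
cp -a /var/ton-work/db/keyring /mypool/ton-db/

# Make sure the restored data belongs to the node's user.
chown -R ton:ton /mypool/ton-db
```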
Make sure you start validator-engine with the following params: --state-ttl 3153600000 --archive-ttl 3153600000
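A full invocation could then look like this sketch (the db path and global-config path are assumptions; the TTL flags are the ones required above):

```shell
validator-engine \
  -C /usr/bin/ton/global.config.json \
  --db /mypool/ton-db \
  --state-ttl 3153600000 \
  --archive-ttl 3153600000
```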
Please be patient once you start the node, and observe the logs. The dump comes without DHT caches, so it will take your node some time to find other nodes and then sync with them. Depending on the age of the snapshot, your node may need from a few hours to several days to catch up with the network. This is normal.
Node maintenance
The node database requires cleansing from time to time (we advise once a week). To do so, please perform the following steps as root: