dd, fallocate, and truncate to make big files quickly
##########################################
Every example below creates a 10 gig file. dd will take time, whereas fallocate and truncate are instantaneous.
NOTE: dd if=/dev/zero of=10gigfile bs=1M count=10240 will create a 10 gig file, but it will take time to write 10 gigs worth of zeros (with dd, 1M means 1024*1024 bytes, so 10 gigs is 10*1024*1024*1024 bytes, which is why the count is 10240 blocks of 1 MiB).
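For comparison, here is roughly what that dd run plus a size check looks like (10gigfile is just the example name from the note above). Since dd really writes all the data, expect it to take a while and expect du to report the full 10G as allocated:
# dd if=/dev/zero of=10gigfile bs=1M count=10240
# du -h 10gigfile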
Make a 10 gig file
# fallocate -l 10G 10gig
# ls -lisah | grep 10gig
267 10G -rw-r--r-- 1 root root 10G Apr 14 11:03 10gig
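As a side check (not part of the listing above), du can confirm that fallocate allocated the space on disk as well as setting the size; both of these should come back as 10G:
# du -h 10gig
# du -h --apparent-size 10gig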
Add 10 more gigs to the end of the 10 gig file (the -o is the offset, or the start point of the fallocate, so it fallocates 10 gigs from the 10 gig point)
# fallocate -o 10G -l 10G 10gig
# ls -lisah | grep 10gig
267 20G -rw-r--r-- 1 root root 20G Apr 14 11:03 10gig
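If you want exact numbers instead of the human-readable ls output, stat can print the byte size and the allocated block count. For the 20 gig file this should report 21474836480 bytes and 41943040 blocks of 512 bytes, since fallocate allocates the blocks as well as extending the file:
# stat -c '%s bytes, %b blocks of %B bytes' 10gig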
Now we have a 20 gig file; let's allocate 10 gigs starting from the 15 gig point. The file will only grow by 5 gigs, since the first 5 gigs of that range were already allocated.
# fallocate -o 15G -l 10G 10gig
# ls -lisah | grep 10gig
267 25G -rw-r--r-- 1 root root 25G Apr 14 11:03 10gig
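To see how those fallocate calls are actually laid out on disk, filefrag (from e2fsprogs) can list the extents. This is just a sketch and assumes a filesystem with extent reporting such as ext4 or XFS; extents that were fallocated but never written are typically flagged as unwritten:
# filefrag -v 10gig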
Truncate can be used to make similar files:
Make a 10 gig file
# truncate -s 10G tengig
# ls -lisah | grep tengig
268 0 -rw-r--r-- 1 root root 10G Apr 14 11:05 tengig
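Note the allocated size of 0 in that listing: truncate made a sparse file. A quick way to see the difference (a sketch, not from the original session) is to compare allocated size against apparent size; expect roughly 0 from the first command and 10G from the second:
# du -h tengig
# du -h --apparent-size tengig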
Add 10 gigs to that file (to the end of the file):
# truncate -s +10G tengig
# ls -lisah | grep tengig
268 0 -rw-r--r-- 1 root root 20G Apr 14 11:05 tengig
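truncate can also go the other way: giving it an absolute size smaller than the current one cuts the file back down. A hedged example, not part of the original walkthrough, that shrinks the file back to 5 gigs:
# truncate -s 5G tengig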
From “Victor” on Stack Overflow:
Link: http://stackoverflow.com/questions/257844/quickly-create-a-large-file-on-a-linux-system
This will create a 10M file instantaneously (M stands for 1024*1024 bytes, MB stands for 1000*1000 bytes; the same goes for K and KB, G and GB, and so on).
EDIT: as many have pointed out, this will not physically allocate the file on your device. With this you could actually create an arbitrarily large file, regardless of the available space on the device.
So, when doing this, you will be deferring physical allocation until the file is accessed. If you’re mapping this file to memory, you may not have the expected performance.
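Since no space is physically allocated, you can create a sparse file far bigger than the free space on the device (a sketch; the filesystem's maximum file size is the only limit, and actually filling it with data would still require real space). The name bigsparse is just a placeholder; the listing should show 0 allocated against a 1.0T apparent size:
# truncate -s 1T bigsparse
# ls -lsh bigsparse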
From “Dan McAllister” on Stack Overflow:
Same link: http://stackoverflow.com/questions/257844/quickly-create-a-large-file-on-a-linux-system
This is a common question — especially in today's world of virtual environments. Unfortunately, the answer is not as straightforward as one might assume.
dd is the obvious first choice, but dd is essentially a copy and that forces you to write every block of data (thus, initializing the file contents)… And that initialization is what takes up so much I/O time. (Want to make it take even longer? Use /dev/random instead of /dev/zero! Then you’ll use CPU as well as I/O time!) In the end though, dd is a poor choice (though essentially the default used by the VM “create” GUIs).
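A rough way to see the difference Dan describes is to time the two approaches side by side. This is only a sketch with placeholder file names, and the exact numbers depend entirely on your disk, but the dd run has to push 10 gigs of zeros out to disk while fallocate just asks the filesystem to reserve the blocks:
# time dd if=/dev/zero of=dd-10gig bs=1M count=10240
# time fallocate -l 10G falloc-10gig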
truncate is another choice — and is likely the fastest… But that is because it creates a “sparse file”. Essentially, a sparse file is a section of disk that has a lot of the same data, and the underlying filesystem “cheats” by not really storing all of the data, but just “pretending” that it’s all there. Thus, when you use truncate to create a 20 GB drive for your VM, the filesystem doesn’t actually allocate 20 GB, but it cheats and says that there are 20 GB of zeros there, even though as little as one track on the disk may actually (really) be in use.
fallocate is the final — and best — choice for use with VM disk allocation, because it essentially “reserves” (or “allocates”) all of the space you're seeking, but it doesn't bother to write anything. So, when you use fallocate to create a 20 GB virtual drive space, you really do get a 20 GB file (not a “sparse file”), and you won't have bothered to write anything to it — which means virtually anything could be in there, kind of like a brand new disk!
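A practical note on telling the results apart: comparing the allocated-size column from ls -ls (or du) against the normal size column shows which kind of file you have. Going by the listings earlier in this post, the fallocated file reports its full size in both columns, while the truncate-created sparse file reports 0 allocated:
# ls -lsh 10gig tengig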