Test: Do Linux filesystems need defragmentation?
It was in an LXer discussion that one of our readers suggested all those portage updates and temporary files from the compilation process lead to serious fragmentation of a Gentoo Linux system. That's plausible: looking at the output of an 'emerge' process, a tremendous number of files are created, copied or moved, and finally deleted - not to mention when the portage tree itself is updated. For normal usage, a Gentoo Linux system probably sees one of the highest rates of this kind of file operation. Therefore, I figured the filesystems on a Gentoo Linux box were the ideal test to see whether Linux filesystems could live up to my expectations. However, since most people believe Linux filesystems don't need defragmentation, it was quite hard to find a tool that measures filesystem fragmentation. Nonetheless, I found one in the Gentoo forums. Here's the great part: you can use it yourself to measure the fragmentation of your own filesystems!
I don't know what license the content of the Gentoo forums is under, so I'll provide a short HOWTO instead. Credits go to Gentoo user _droop_ for producing this perl script.
The argument should be the mount point of one of your mounted filesystems if you want to know the fragmentation of a particular filesystem, like / or /usr. More interesting, perhaps, is the ability to pass any directory you like; for example, I tested it on the directory where my P2P program occasionally puts some music, /home/kwint/music - only to find out the average mp3 in that directory consists of more than 50 fragments! It seems you must be root to run the scan, so I advise using sudo. A typical invocation would be:
$ sudo ./frag.pl /root
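I can't reproduce the perl script itself here, but the idea behind it is straightforward. The following is a hypothetical Python sketch of the same approach, built on filefrag(8) from e2fsprogs; the script name and the exact output format are my own assumptions, not _droop_'s original.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a fragmentation scanner in the spirit of
the Gentoo-forums frag.pl script. Relies on filefrag(8) from
e2fsprogs; run it as root so every file can be opened."""
import os
import re
import subprocess
import sys


def parse_extents(filefrag_output):
    """Pull the extent count out of a filefrag report line such as
    '/root/foo: 3 extents found'. Returns None if nothing matched."""
    m = re.search(r"(\d+) extents? found", filefrag_output)
    return int(m.group(1)) if m else None


def count_extents(path):
    """Run filefrag on one file and return its number of extents."""
    try:
        out = subprocess.run(["filefrag", path], capture_output=True,
                             text=True, check=True).stdout
    except (subprocess.CalledProcessError, OSError):
        return None
    return parse_extents(out)


def scan(root):
    files = fragmented = total_extents = 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path) or os.path.islink(path):
                continue
            extents = count_extents(path)
            if extents is None:
                continue
            files += 1
            total_extents += extents
            if extents > 1:
                fragmented += 1
    if files:
        print(f"{fragmented / files:.1%} non-contiguous files, "
              f"{total_extents / files:.2f} fragments per file on average")


if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

A file stored in a single extent is contiguous; anything reported with two or more extents counts as fragmented, which is the same notion of fragmentation the numbers below use.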
Now, let's look at some actual numbers from my two Gentoo boxes. In both cases the portage tree has been synchronized more than fifty times, and both boxes have been in use for about a year and a half. The results: less than 4% of the files on my /usr and /var partitions are fragmented. On Gentoo, /var is where the actual compiling takes place; once compiled, Gentoo moves the files to /usr. The files there consist of an average of at most 1.11 fragments. I would consider that a very neat score - especially since I use the -notail and -noatime options for ReiserFS, and it seems those options make my ReiserFS more susceptible to file fragmentation. Back to my results: I'm sure the Windows defragmentation utility would say I don't need to defragment.
But what if I _would_ like to defragment my filesystem - would that be possible? Well, although there's no official 'compiled' application for it as far as I could find, you can try Con Kolivas' script. For example, my /home partition contains 13G of files. Some of them are rather large, like one-gigabyte VMware image files and some Linux ISOs; all of it is on ReiserFS again, by the way. The results were really bad: 6 fragments per file on average. This seemed like a good testing ground for Con's script, so I gave it a spin.
The results were quite stunning. I first tested it on my /home/kwint/music directory, where the average number of fragments per file was the 50 I mentioned above. After running Con's script this was reduced to 1.1, and the percentage of non-contiguous files dropped from 70% to 10%! This took only seven seconds to complete, and only 21 large files were touched. For your information, I'm running 2x5400rpm PATA disks in software RAID0 by means of EVMS here. Since this was a success, I decided I might as well 'defragment' my whole /home partition. This time, over 6000 files had to be defragmented, which of course took far more time - 22 minutes eventually. It was worth it: while the percentage of fragmented files only dropped from 8% to 7% - that must be because mainly large files were defragmented - the average number of fragments per file dropped from 6 to 1.5.
Conclusion
Though I only tested two PCs, here's what I found: file fragmentation mainly happens in filesystems that contain some large files. You don't have to worry about your /usr or /var directories being fragmented, since they don't contain many large files. And, as the results show, it's worth the effort to try Con Kolivas' defragmentation script.
Please, if you try the scripts, share your findings in the comment threads below!