AUG 19 2006
Why does Linux need defragmenting?

This so often repeated myth is getting so old and so boring. And untrue. Linux doesn't need defragmenting, because its filesystem handling is not as stupid as that of the several decades old FAT. Yadda yadda, blah blah. Now, the real question is: if Linux really doesn't need defragmenting, why does Windows boot faster, and why does the second startup of KDE need only roughly one quarter of the time the first startup needs?

Ok, first of all, talking about defragmenting is actually wrong. Defragmenting is making sure no file is fragmented, i.e. that every file is just one contiguous area of the disk. But do you know any of today's applications that reads just one file? The thing that should be talked about instead is linearizing, i.e. making sure that related files (not one file, files) are one contiguous area of the disk.

Just in case you don't know, let me tell you one thing about the thing busily spinning in your computer: it's very likely it can read 50M or more in a second without trouble - if it's a block read of contiguous data. However, as soon as it actually has to seek in order to access data scattered in various areas of the disk, the reading performance can suddenly plummet seriously - only bloody fast drives today have an average seek time smaller than 10ms, and your drive is very likely not one of them. Now do the maths: how many times do 10ms (or more) fit in one second? Right, at most 100 times. So your drive can on average read at most 100 files a second, and that's actually ignoring the fact that reading a file usually means more than just a single seek (on the other hand that's also ignoring the drive's built-in cache that can avoid some seeks). Some of the pictures explaining how Linux doesn't need defragmentation actually nicely demonstrate that with files scattered so much the disk simply has to seek.

Now, again, how many files does an average application open during startup? One? It's actually hundreds, usually, at least. And since the Linux kernel (AFAIK) at present has next to no support for linear reading of several files, you can guess what happens. Indeed, kernel developers will undoubtedly tell you that it's the applications' fault and that they shouldn't be using so many files, but then kernel developers often have funny ideas about how userspace should work, and seriously, why do we have filesystems if they're not to be used and applications should compress all their data into a single file? For people who don't know about this (and most don't, actually) it feels kind of natural to structure data into files. Nothing is perfect and just blaming kernel developers for this wouldn't be quite fair, but it sometimes can really upset me when I see people "fixing" problems by claiming they don't exist.

I am a KDE developer, not a kernel developer, so it may very well be that some of what I've written above is wrong, but the single fact that the problem exists can easily be proved even by you: boot your computer, log into KDE, wait for the login to finish. Log out. Log in again. Even if you use a recent distribution that may use some kind of preload technique that reduces this problem, there should still be a visible difference. And the only difference is that the second time almost everything is read from the kernel's disk caches instead of the disk itself. Which avoids reading of the data and which avoids seeking.
And the difference is the seeking, not the reading of the data: KDE during startup is very unlikely to read more than 100M of data, and that's 2 seconds with a 50M/s disk - is the difference really only 2 seconds for you? I don't think so. So, who still believes the myth that everything in the land of Linux filesystems is nice and perfect? Fortunately, some kernel developers have started investigating this problem and possible solutions.
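To make the seek arithmetic easy to reproduce, here is a small hand-rolled sketch (mine, not something from the post or from the kernel): it walks a directory, reads a couple of thousand files once with a cold page cache and once right after with a warm one, and prints both times. The /usr/lib default and the 2000-file limit are arbitrary assumptions, and dropping the caches requires root.

```python
#!/usr/bin/env python3
# Sketch of the cold-vs-warm experiment described above (not the post's code).
# Reads a pile of small files with an empty page cache, then again from cache.
import os
import sys
import time

def collect(directory, limit=2000):
    """Gather up to `limit` file paths under `directory`."""
    paths = []
    for root, dirs, files in os.walk(directory):
        for name in files:
            paths.append(os.path.join(root, name))
            if len(paths) >= limit:
                return paths
    return paths

def read_all(paths):
    """Read every file completely, returning the total number of bytes."""
    total = 0
    for path in paths:
        try:
            with open(path, 'rb') as f:
                total += len(f.read())
        except OSError:
            pass
    return total

def drop_caches():
    # Available since Linux 2.6.16; if not run as root the "cold" pass
    # is not really cold.
    os.system('sync')
    try:
        with open('/proc/sys/vm/drop_caches', 'w') as f:
            f.write('3\n')
    except OSError:
        print('warning: could not drop caches (not root?)')

if __name__ == '__main__':
    paths = collect(sys.argv[1] if len(sys.argv) > 1 else '/usr/lib')
    drop_caches()
    start = time.time()
    nbytes = read_all(paths)
    cold = time.time() - start
    start = time.time()
    read_all(paths)
    warm = time.time() - start
    print('%d files, %.1f MB' % (len(paths), nbytes / 1e6))
    print('cold: %.2fs (~%.0f files/s), warm: %.2fs'
          % (cold, len(paths) / max(cold, 0.01), warm))
```

If the argument above is right, the cold pass should be limited to something on the order of a hundred files per second by seeking alone, while the warm pass should finish almost instantly.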
Comments
Yes, agreed. Furthermore,
Yes, agreed. Furthermore, some people (like me) who download lots of anime have a harddisk that's always almost full. Right now it's at 69% but last week it was at 90%. And the filesystem becomes fragmented when usage is over 80%.
I wonder why nobody has even tried to write a decent defragmentation tool yet.
Because...
...defragmentation in Unix and its likes only exacerbates the problem. I'm not kidding. If things are seriously fragmented on your disk (which should only pose a problem above 90%, or else change your filesystem, dude, which you can do), defragmenting it will only take ages and leave you with a defragmented filesystem that will become extremely fragmented again in a short amount of time as you keep adding files to your disk (especially small ones like, say, the KDE Konqueror cache, so it's kind of unavoidable).
re: especially small ones
>(especially small ones like, say, the KDE Konqueror cache, so it's kind of unavoidable).
Define small. ;) Single files fitting into one block each can't fragment. And to do something about keeping related files as contiguous as possible, the kernel would need to gather statistical data about which files get regularly used together by processes (maybe even in which order), how often these files change their size (!= modified), etc., and write special set/sequence properties to the files that the filesystems can make use of. I'd think that such sort of profiling would be rather expensive.
Is profiling expensive?
Why couldn't we compile a kernel with profiling enabled, use it for some days in order to record profiling data about our 'average file use', then disable profiling in the kernel and keep using that profiling data for a long time?
(The idea isn't a new one. It's very close to the way fcache does the job.)
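The "profile once, reuse the data" idea can even be approximated entirely in userspace, without any kernel support. A minimal sketch under that assumption (mine, not how the actual fcache patch works): record which files an application opens with strace, then on later logins read that list back into the page cache in one linear pass before the application starts. The trace file name, the parsing and the chunk size are all made up for the example.

```python
#!/usr/bin/env python3
# Userspace approximation of "profile once, reuse the data" (illustrative
# sketch only).  Step 1, run once:
#     strace -f -e trace=open,openat -o kwrite.trace kwrite
# Step 2, on later logins, run this script before starting the application so
# that its files are already sitting in the page cache.
import re
import sys

# Matches successful open()/openat() calls in strace output, capturing the path.
OPEN_RE = re.compile(r'open(?:at)?\(.*?"([^"]+)".*\)\s*=\s*\d+')

def parse_trace(trace_path):
    """Return the opened paths in the order the traced program opened them."""
    paths = []
    with open(trace_path) as trace:
        for line in trace:
            match = OPEN_RE.search(line)
            if match:
                paths.append(match.group(1))
    return paths

def preload(paths):
    """Read each file once so that later opens are served from the page cache."""
    for path in paths:
        try:
            with open(path, 'rb') as f:
                while f.read(1 << 20):      # 1 MB chunks
                    pass
        except OSError:
            pass                            # directories, sockets, vanished files

if __name__ == '__main__':
    preload(parse_trace(sys.argv[1] if len(sys.argv) > 1 else 'kwrite.trace'))
```

This doesn't remove the seeking - the files are still scattered - but it moves all of it into one early, predictable pass, which is roughly what the distribution preload techniques mentioned in the post do.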
> which should only pose a
> which should only pose a problem above 90%
My usage *is* above 90% very frequently, and I'm not the only one. Think of the people who download lots of anime.
> or else change your filesystem, dude, which you can do
You talk as if that's very easy. I have to back up 100 GB (yes, I mean it) worth of data! Come on, I have better things to do than waste a day like that.
Here is a try
>> I wonder why nobody has even tried to write a decent defragmentation tool yet.
You may be interested in this: http://vleu.net/shake/
(I'm not sure you'll think it's 'decent', but you asked for a try.)
The author gave some more explanations here: http://forums.gentoo.org/viewtopic-t-463204-highlight-shake.html
He seems to take 'distance between two files used at the same time' into consideration, so it may be a step in the right direction.
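For anyone who wants to see how scattered their files actually are before reaching for a tool like shake, here is a rough checker in the same spirit (my own sketch, not how shake works): it uses the FIBMAP ioctl to ask where each block of a file lives on disk and counts the contiguous runs, i.e. roughly how many seeks a full read of the file costs. FIBMAP needs root and a filesystem that supports it (ext2/ext3 do).

```python
#!/usr/bin/env python3
# Rough per-file fragmentation check via the FIBMAP ioctl (root required).
# Illustrative sketch only; not related to how shake actually works.
import fcntl
import os
import struct
import sys

FIBMAP = 1     # <linux/fs.h>: _IO(0x00, 1), maps logical block -> physical block
FIGETBSZ = 2   # <linux/fs.h>: _IO(0x00, 2), returns the filesystem block size

def physical_blocks(path):
    """Return the physical block number of every block of the file."""
    with open(path, 'rb') as f:
        fd = f.fileno()
        blocksize = struct.unpack('i', fcntl.ioctl(fd, FIGETBSZ, struct.pack('i', 0)))[0]
        nblocks = (os.fstat(fd).st_size + blocksize - 1) // blocksize
        blocks = []
        for logical in range(nblocks):
            result = fcntl.ioctl(fd, FIBMAP, struct.pack('i', logical))
            blocks.append(struct.unpack('i', result)[0])
        return blocks

def fragments(blocks):
    """Count contiguous runs, i.e. how many separate extents the file uses."""
    runs, previous = 0, None
    for block in blocks:
        if previous is None or block != previous + 1:
            runs += 1
        previous = block
    return runs

if __name__ == '__main__':
    for path in sys.argv[1:]:
        blocks = physical_blocks(path)
        print('%s: %d blocks, %d fragment(s), first block %s'
              % (path, len(blocks), fragments(blocks), blocks[0] if blocks else '-'))
```

Run on the set of files an application opens at startup, the "first block" column also shows how far apart they start on disk, which is the 'distance between two files used at the same time' mentioned above.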
I disagree partially
And the full story is here: About boot time optimization in Linux
What about dog slow init-systems?
One possible solution (as you already know) may be fcache (http://lkml.org/lkml/2006/5/15/46) for kernel space. Currently it works well but also has some limitations (it supports only ext3).
But I think the main problems are init systems, not disk defragmentation or drive seek times.
When we started to investigate what was causing such long boot times, we found there are tons of really ugly/unmaintained code lying in nearly every init system. And the coldplug/hotplug systems are really slow for the same reason.
So we decided to give it a try with a high-level language (in our case Python) instead of awk+sed+etc/bash or C. We started to change our init and cold/hotplug systems, and here are the results:
http://cekirdek.uludag.org.tr/~caglar/blog/?file=mudur.blog
Pardus 1.1 alpha2 boots in nearly 16 seconds on my Sony laptop (IDE disk, 3400 rpm) (our old init takes ~1:35), and with the help of fcache logging into KDE takes 2 seconds at most.
So I think there is some progress going on, waiting to be used by distros.
But init-related optimizations are part of the story
and the other part, summarized in one sentence, is this: why do I have to wait up to 60 seconds for KWrite to start on my computer, when I have 768 MB of RAM? Sure, I've got a ton of apps that I use and keep open all the time, but why does everything in RAM get pushed to disk if it hasn't been used for a couple of minutes? And, more importantly, what is KWrite reading from the frigging disk that it takes so long to start up? It's a DAMN TEXT EDITOR!
60 seconds to start?
There must be something wrong with your setup. I have 512 MB of RAM:
$ time kwrite
real 0m1.342s
user 0m0.820s
sys 0m0.176s
So, starting kwrite is much faster for me on a system with less RAM. I think your KDE is screwed.