
Is pre-linking worth it?

By Jake Edge
July 15, 2009

The recent problem with prelink in Fedora Rawhide has led some to wonder what advantages pre-linking actually brings—and whether those advantages outweigh the pain it can cause. Pre-linking can reduce application startup time—and save some memory as well—but there are downsides, not least the possibility of an unbootable system, as some Rawhide users encountered. The advantages are small enough, or hard enough to quantify, that questions arise about whether pre-linking is justified as the default for Fedora.

Linux programs typically consist of a binary executable file that refers to multiple shared libraries. These libraries are loaded into memory once and shared by multiple executables. In order to make that happen, the dynamic linker (i.e. ld.so) needs to change the binary in memory such that any addresses of library objects point to the right place in memory. For applications with many shared libraries—GUI programs for example—that process can take some time.
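
As a rough illustration of the work involved, the commands below show how many libraries a typical GUI binary pulls in and roughly how many relocation entries one of those libraries carries; the binary and library paths are only examples and will differ from system to system:

    ldd /usr/bin/gnome-terminal | wc -l                 # how many shared libraries get mapped
    readelf -d /usr/bin/gnome-terminal | grep NEEDED    # its direct DT_NEEDED dependencies
    readelf -r /usr/lib/libgtk-x11-2.0.so.0 | wc -l     # a rough count of relocation entries in one library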

The idea behind pre-linking is fairly simple: reduce the amount of time the dynamic linker needs to spend doing these address relocations by doing it in advance and storing the results. The prelink program processes ELF binaries and shared libraries in much the same way that ld.so would, and then adds special ELF sections to the files describing the relocations. When ld.so loads a pre-linked binary or library, it checks these sections and, if the libraries are loaded at the expected location and the library hasn't changed, it can do its job much more quickly.
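
What pre-linking does to a file can be inspected directly; this is only a sketch, assuming the prelink tool is installed, and the section names mentioned in the comments are the ones current prelink versions are known to use:

    prelink ./someprog                     # pre-link a single binary (prelink -a processes the whole system)
    readelf -S ./someprog | grep -F .gnu.  # look for sections such as .gnu.liblist, .gnu.conflict, and .gnu.prelink_undo
    prelink --undo ./someprog              # restore the original, un-prelinked file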

But there are a few problems with that approach. For one thing, it makes the location of shared libraries very predictable. One of the ideas behind address space layout randomization (ASLR) is to randomize these locations each time a program is run—or a library loaded—so that malicious programs cannot easily and reproducibly predict addresses. To alleviate this problem, on Fedora and Red Hat Enterprise Linux (RHEL) systems prelink is re-run every two weeks with a parameter that requests new random addresses, but those addresses stay fixed over the intervening period.
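
That predictability is easy to observe: on a pre-linked system the base address reported for a library stays the same from one run to the next, while with full ASLR it moves. A quick check (the grep pattern may need adjusting for how the libc file is named locally):

    # print where libc is mapped for three separate runs of /bin/cat; with
    # pre-linked, fixed addresses the line is identical every time
    for i in 1 2 3; do
        cat /proc/self/maps | grep libc | head -n 1
    done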

In addition, whenever applications or libraries are upgraded, prelink must be run again. The dynamic linker is smart enough to recognize the situation and revert to its normal linking process when something has changed, but the advantage that prelink brings is lost until the pre-linking is redone. Also, the kernel randomly locates the VDSO (virtual dynamically-linked shared object) "library", which, on 32-bit systems, can overlap one of the pre-linked libraries, requiring some address relocation anyway. Overall, pre-linking is a bit of a hack, and it is far from clear that its benefits are substantial enough to overcome that.

Fedora and RHEL enable pre-linking by default; most other distributions make prelink available but seem unconvinced that the benefits are substantial enough to make it the default. Because it is a very system-dependent feature, hard performance numbers are difficult to find. It certainly helps in some cases, but is it really something that everyone needs?

Matthew Miller brought that question up on the fedora-devel mailing list:

I see [prelink] as adding unnecessary complexity and fragility, and it makes forensic verification difficult. Binaries can't be verified without being modified, which is far from ideal. And the error about dependencies having changed since prelinking is disturbingly frequent.

On the other hand, smart people have worked on it. It's very likely that those smart people know things I don't. I can't find any good numbers anywhere demonstrating the concrete benefits provided by prelink. Is there data out there? [...]

Even assuming a benefit, the price may not be worth it. SELinux gives a definite performance hit, but it's widely accepted as being part of the price to pay for added security. Enabling prelink seems to fall on the other side of the line. What's the justification?

Glibc maintainer Ulrich Drepper noted that pre-linking avoids most or all of the cost of relocations, while also pointing out that the relatively new symbol table hashing feature (DT_GNU_HASH) in the GNU toolchain reduces the gain from pre-linking. He also described an additional benefit: memory pages that require no relocation changes are never written to, so copy-on-write never copies them and they can be shared between multiple processes running the same executable. But his primary motivation may have more to do with his own work flow: "Note, also small but frequently used apps benefit. I run gcc etc a lot and like every single saved cycle."

The effect of pre-linking can be measured using the LD_DEBUG environment variable, as Drepper described. Jakub Jelinek, the author of prelink, posted some results for OpenOffice.org Writer showing an order of magnitude difference in the amount of time spent doing relocations between pre-linked and regular binaries. Those results are impressive but, at least for long-running programs, startup time doesn't really dominate—desktop applications and often-used utilities are the likely beneficiaries. As Miller puts it:

If I can get a 50% speed up to a program's startup times, that sounds great, but if I then leave that program running for days on end, I haven't actually won very much at all -- but I still pay the price continuously. (That price being: fragility, verifiability, and of course the prelinking activity itself.)
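
Anyone can repeat the kind of measurement Drepper and Jelinek describe; glibc's dynamic loader reports its own startup timing when asked, though the exact wording of its output differs between glibc versions, and oowriter below simply stands in for whatever large application is of interest:

    # ask ld.so to print startup statistics, including relocation time and counts
    LD_DEBUG=statistics /bin/true 2>&1 | grep -i -e startup -e reloc
    # for a big GUI application, send the report to a file (ld.so appends the PID)
    LD_DEBUG=statistics LD_DEBUG_OUTPUT=/tmp/ld-stats oowriter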

For 32-bit processors, though, which are the ones most likely to benefit from the memory savings, there is still the VDSO overlap problem. John Reiser did an experiment using cat and found that glibc needed to be dynamically relocated fairly frequently:

This means that glibc must be dynamically relocated about 10% of the time anyway, even though glibc has been pre-linked, and even though /bin/cat is near minimal in its use of shared libraries. When a GNOME app uses 50 or more pre-linked shared libs, as claimed in another thread on this subject, then runtime conflict and expense are even more likely.
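
Reiser's experiment can be approximated with the same LD_DEBUG facility; this is only a sketch, and the string being grepped for depends on the glibc version in use:

    # run a small pre-linked program repeatedly; on runs where prelink's layout
    # holds, ld.so reports almost no processed relocations, while on runs where
    # the VDSO has landed on top of a pre-linked library the counts jump back up
    for i in $(seq 1 20); do
        LD_DEBUG=statistics /bin/cat /dev/null 2>&1 | grep 'number of relocations:'
    done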

There doesn't seem to be much interest in removing the prelink default for Fedora, but one has to wonder, if the savings are as large and widespread as people seem to think, why other distributions have been reluctant to adopt it. Part of the reason may be the possibility of a prelink bug rendering systems unbootable, or a reluctance to rely on something that requires regularly modifying binaries and libraries to keep everything in sync. The security issues may also play into their thinking, though Jelinek argues that security-sensitive programs should be position-independent executables (PIE), which are not pre-linked and thus have ASLR applied on every execution.
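
Building a position-independent executable is just a matter of compiler and linker flags; a minimal sketch, with hello.c standing in for any program:

    gcc -fPIE -pie -o hello hello.c     # compile and link as a PIE
    readelf -h ./hello | grep Type      # a PIE is reported as "DYN (Shared object file)"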

While not impossible, a problem like Rawhide suffered seems unlikely to occur in more polished, non-development releases. Though prelink does provide a benefit, that benefit may be harder to justify as time goes on. For some who are extremely sensitive to startup time costs, it may make a great deal of sense, but it may well be that for the majority of users, the annoyance and dangers are just not worth it.




Is pre-linking worth it?

Posted Jul 15, 2009 17:06 UTC (Wed) by fuhchee (guest, #40059) [Link] (7 responses)

Looking at your last paragraph, I can't match up these two claims: ... it may well be that for the majority of users, the annoyance and dangers are just not worth it ... and ... a problem like Rawhide suffered seems unlikely to occur in more polished, non-development releases ...

What "annoyance and dangers" does "that majority" experience, other than that single rawhide bug (which by nature is not used by many people)?

Is pre-linking worth it?

Posted Jul 15, 2009 17:20 UTC (Wed) by jake (editor, #205) [Link] (6 responses)

hmm, "unlikely" does not mean impossible. there are potential dangers associated with pre-linking, from possible bugs to security implications, as i thought i described in the article; perhaps i didn't communicate them well, though. the annoyances are things like prelink needing to run regularly, changing binary files which makes binary integrity harder to check, etc.

and, if i am not mistaken, Fedora would like to see *more* people use rawhide.

jake

Is pre-linking worth it?

Posted Jul 15, 2009 21:11 UTC (Wed) by nix (subscriber, #2304) [Link] (5 responses)

On 64-bit boxes, of course, there are pretty much no security implications
of prelinking: even if ASLR *is* statically determined when prelink is
active, the address space is large enough that an attacker has little
chance of success anyway. And the address space on 32-bit is small enough
that ASLR is at best a band-aid.

The danger with prelink isn't that it lets attackers bruteforce their way
past ASLR (they can do that anyway). It's that *if* they have a multistage
attack to carry out (guess ASLRed addresses then guess something else,
say) and *if* they can tell that ASLR has been defeated and *if* each
attack round involves exec()ing a new program (rather than fork()ing an
old one or using a thread pool), then they can eliminate the effects of
ASLR more rapidly and concentrate on the second part of the attack, if
prelink is in use.

I'm not sure this actually affects many programs. openssh is the only one
I can think of that actually exec()s a new copy of itself when a request
comes in (specifically to allow ASLR to rerandomize things). Apache
doesn't do this and neither does anything else I can think of except for
services run from inetd.

Can anyone think of any other network-facing programs this might affect?

(I don't prelink my 32-bit firewalls for exactly this reason. Boxes behind
the firewall get prelinked.)

Is pre-linking worth it?

Posted Jul 15, 2009 21:53 UTC (Wed) by jake (editor, #205) [Link] (4 responses)

> On 64-bit boxes, of course, there are pretty much no security implications
> of prelinking: even if ASLR *is* statically determined when prelink is
> active, the address space is large enough that an attacker has little
> chance of success anyway.

Hmm, the security problems with pre-linking don't center around brute-forcing library addresses, I don't think. Instead, if an attacker can run a program and see where libc (for example) ends up in their memory map, they can be pretty sure it will be in the same place for other targets of interest. For up to two weeks.

jake

Is pre-linking worth it?

Posted Jul 15, 2009 23:12 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

A *local* attacker? If a hostile attacker has got onto your system and can
observe /proc/self/maps for security-critical programs (say, those running
as root) you have already lost, ASLR or no ASLR. If you are concerned
about hostile users, chmod such programs so that only root can execute
them, or explicitly 'prelink --undo' them to re-enable ASLR for those
programs.

I was talking about ASLR's role in stopping them from getting on in the
first place, when they have to use 'did {this part of} the exploit work?'
as an oracle.

Is pre-linking worth it?

Posted Jul 15, 2009 23:25 UTC (Wed) by jake (editor, #205) [Link] (1 responses)

> A *local* attacker? If a hostile attacker has got onto your system and can
> observe /proc/self/maps for security-critical programs

I think we are talking across each other. If an attacker can do 'cat /proc/self/maps', and thus see the memory map of 'cat', they will see where libc is mapped. On a pre-linked system, that is the likely place it is mapped for *other* interesting programs. So, a local attacker, or one who can get map information for simple, non-security-critical programs, can then use that information in a buffer overflow or other exploit of the security-critical program of interest (assuming it links to libc).

*that* is a security hole for pre-linking ...

(btw, your email notifications are bouncing :)

jake

Is pre-linking worth it?

Posted Jul 16, 2009 0:36 UTC (Thu) by nix (subscriber, #2304) [Link]

Agreed: prelink is problematic iff you have hostile local users, but if
you avoid prelinking privileged programs, libc will be mapped elsewhere
for those programs.

(Sorry about the bounces: my ISP, Zetnet, has gone insane, changing my
static IP but failing to tell me the new IP address ahead of time, failing
to update my DNS zone even when specifically requested, so all its RRs
still point to the old address, messing up the MX relay so that all
incoming email, even to their own DNS administrator, gets bounced rather
than queued, and failing to give me a MAC code (horrid UK-specific
broadband techspeak) as they are legally obliged to, so I can't even
switch to a new ISP. It's absolutely crazy, and I can do nothing
whatsoever about it. God knows what email I've lost.

To make this Linux-relevant, let this be a lesson to everyone in the
dangers of making Debian developers redundant, especially when they know
the network from top to bottom and you're just about to carry out major
changes to that network. In fact it's better never to make Debian
developers redundant at all. Actually it's best if you just give them lots
of money without their even needing to ask. Really.)

Is pre-linking worth it?

Posted Jul 16, 2009 6:05 UTC (Thu) by PaXTeam (guest, #24616) [Link]

ASLR was never meant to protect against local attacks, see http://lwn.net/Articles/334027/ for more explanation. the two week period is a vulnerability however for it's plenty of time to remotely brute force guess addresses (that is, when one cannot already leak addresses). that's why ASLR is really only meaningful when you can limit brute force search.

Why modify the executable?

Posted Jul 15, 2009 17:17 UTC (Wed) by epa (subscriber, #39769) [Link] (13 responses)

Modifying the executable by adding another ELF section sounds messy. Why not a separate file foo.prelink storing the information? Then ld.so can use this file if it exists and if ld.so is configured to take notice of prelink info. The original binary is unchanged.

Why modify the executable?

Posted Jul 15, 2009 18:51 UTC (Wed) by i3839 (guest, #31386) [Link]

Because that would cause an extra disk seek, destroying any benefit prelink might give.

Why modify the executable?

Posted Jul 15, 2009 19:26 UTC (Wed) by Los__D (guest, #15263) [Link]

How is adding another file less messy than adding another ELF section?

Why modify the executable?

Posted Jul 15, 2009 21:04 UTC (Wed) by nix (subscriber, #2304) [Link] (10 responses)

That's inaccurate. prelink modifies the relocations in the executable
itself: it doesn't add a new section describing them.

It *does* add a section describing the 'conflicts' that it *cannot*
resolve in advance --- there are two per C++ virtual method table, for
instance --- and a section allowing it to undo its own work, but there is
no new section describing relocations. The glory of prelink is that, to a
large extent, all the dynamic loader needed to do to work with it was to
know when to get out of the way.

And it has a huge positive benefit for KDE, so large that a kludge
(kdeinit) had to be implemented specifically to prevent a massive slowdown
when prelink wasn't active (also because without kdeinit you'd have
to execute a new program to handle *every single URL* that was loaded
while loading a webpage, for instance). That's because KDE programs link
against a lot of C++ shared libraries, and even with .gnu.hash
ameliorating the 'long C++ symbols take ages to relocate' problem, the
libraries are so large that relocation takes ages (even with DT_BIND_NOW
off: some of the relocations have to be processed immediately and can't be
handled lazily).

An example: on my KDE 3.5.10 system here, kio_http (a shared library
itself) and dependent libraries contain a total of 48929 symbols with an
average length of 31.9. (Taking C++ symbols only, the average symbol
length is 35.4: most of the symbols are C++). You can expect a
nonprelinked KDE program to use a meg or so of extra dirty nonshareable
memory just for the relocations (I haven't looked at this figure for some
time: it'll be worse on 64-bit boxes but of course they normally have more
memory anyway).

That is *not* small. prelink really does help with monsters like this,
even with DT_GNU_HASH to help speed up relocation when it must happen.

And programs are only going to get bigger.
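
Numbers of that sort can be reproduced with nothing fancier than nm and awk; a rough sketch, where the library path is an assumption that will differ per system and KDE version:

    # count the dynamic symbols a library exports and their average name length,
    # then count how many of them are mangled C++ names (those starting with _Z)
    nm -D --defined-only /usr/lib/libkio.so.4 |
        awk '{ n++; len += length($3) } END { print n, len/n }'
    nm -D --defined-only /usr/lib/libkio.so.4 | awk '{ print $3 }' | grep -c '^_Z'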

Why modify the executable?

Posted Jul 15, 2009 22:48 UTC (Wed) by pynm0001 (guest, #18379) [Link] (1 responses)

A couple of small quibbles: Although KDE downloads webpages through a separate process, it does not necessarily have to fork one for every link visited. Existing kio_http processes (procs, not shared libs) are retained for some amount of time after they've been forked, in case they need to be reused.

kdeinit existed before prelink, although essentially to fix the same problem.

Why modify the executable?

Posted Jul 15, 2009 23:15 UTC (Wed) by nix (subscriber, #2304) [Link]

Ooo, kio_http is in effect thread-pooled?

[digdig]

So it is! I never knew that. Nifty (also bleeding obvious in hindsight,
given that I can see some hanging around even now, when I'm not actively
downloading any web pages. Ah well, the glory of the Internet is that one
can display one's ignorance in front of thousands).

Are those C++ symbols all external?

Posted Jul 16, 2009 10:00 UTC (Thu) by sdalley (subscriber, #18550) [Link] (5 responses)

nix, is your KDE 3.5.10 built with a recent compiler? All those 48929 C++ library symbols - do they *have* to be externally visible?? Or are most of them object-internal/mangled-static symbols which shouldn't need to be involved in the linking at all? The -fvisibility compiler option is supposedly used (gcc 4.1 and later) to restrict the visibility of internal symbols, are there 48929 externals in spite of that?? That's a *huge* interface surface to document and test if so.

Are those C++ symbols all external?

Posted Jul 16, 2009 17:47 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

That's in spite of using hidden visibility, yes. (The whole thing was
built with GCC 4.3.2.)

(Actually I'm not sure whether hidden visibility ever really worked with
KDE 3.x. I'll look again at a KDE4 installation as soon as I unbreak mine
far enough that a decent subset of the libraries are actually there and
have the symbols they're meant to have.)

Are those C++ symbols all external?

Posted Jul 19, 2009 7:36 UTC (Sun) by dirtyepic (guest, #30178) [Link] (3 responses)

it supposedly does work with KDE 3, or at least I remember us having to disable it repeatedly as support for hidden visibility matured over GCC releases. these days though it works pretty smoothly. note, however, that it's disabled by default. you have to pass --enable-gcc-hidden-visibility to configure at build time.

Are those C++ symbols all external?

Posted Jul 19, 2009 11:25 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

Last time I did that it refused, telling me Qt (3.3.8 with security fixes)
didn't have sufficient support. I think it needs distro patches to Qt...

Are those C++ symbols all external?

Posted Jul 20, 2009 4:39 UTC (Mon) by dirtyepic (guest, #30178) [Link] (1 responses)

I think you're right. I never noticed that before. I find it odd that a feature in KDE would require external patches to Qt, but I guess that's the kind of situation they were in at the time.

Are those C++ symbols all external?

Posted Jul 20, 2009 6:50 UTC (Mon) by nix (subscriber, #2304) [Link]

Getting hidden visibility to work with C++ was tricky, as I recall. You
certainly have to appropriately mark ancestors of classes that are being
so marked to get any real reduction in symbol count, which means Qt does
need marking.
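
For what it's worth, the marking looks something like this; a toy sketch with made-up file and class names, not KDE's actual macros:

    # vis.cc contains two classes with virtual methods; only the one marked with
    # default visibility keeps its symbols, vtable, and typeinfo in the dynamic
    # symbol table when built with -fvisibility=hidden:
    #   struct __attribute__((visibility("default"))) Exported { virtual void f(); };
    #   void Exported::f() {}
    #   struct Internal { virtual void g(); };
    #   void Internal::g() {}
    g++ -shared -fPIC -fvisibility=hidden -o libvis.so vis.cc
    nm -D --defined-only libvis.so | c++filt    # nothing from Internal shows up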

Why modify the executable?

Posted Jul 20, 2009 18:14 UTC (Mon) by oak (guest, #2786) [Link] (1 responses)

> on my KDE 3.5.10 system here, kio_http (a shared library itself) and
> dependent libraries contain a total of 48929 symbols with an average
> length of 31.9. (...). You can expect a nonprelinked KDE program to use a
> meg or so of extra dirty nonshareable memory just for the relocations

I would expect that many symbols to take several megs of nonshareable
memory for the relocations per process...?

Why modify the executable?

Posted Jul 20, 2009 21:55 UTC (Mon) by nix (subscriber, #2304) [Link]

Stop expecting me to do difficult things like multiplying numbers by a
fixed constant. :) And yes, you're right, it is several megs.

Is pre-linking worth it?

Posted Jul 15, 2009 18:03 UTC (Wed) by yaneti (subscriber, #641) [Link] (1 responses)

For the hack that it is, prelinking has been incredibly stable and unobtrusive on all the Fedora production boxes that I've used. The rawhide problem was only the second time that it caused something of a pain for me, and the first one was years ago and not nearly as severe.

"Unobtrusive"?

Posted Jul 17, 2009 18:27 UTC (Fri) by kamil (guest, #3802) [Link]

"Unobtrusive"? LOL.

I remember from the days when I used Fedora that prelink was one of the first things I would disable after a new install/update (selinux was another). As I remember it, excessive I/O caused by periodic prelink re-linking was a major annoyance, and made my Linux boxes feel as if they were running Windows. Prelink was also causing subtle problems, e.g., to win32 plugins in mplayer, IIRC.

But hey, it's been years ago, maybe things have improved since then.

Is pre-linking worth it?

Posted Jul 15, 2009 18:06 UTC (Wed) by arjan (subscriber, #36785) [Link] (10 responses)

In Moblin we also use prelink; we found that, in addition to the arguments mentioned, we can also avoid reading whole chunks of programs from disk into memory, so we save a bunch of time (waiting for that disk seek) and memory.

Rough estimates on a whole system are between 15% to 20% reduction in the payload we read from the disk. That was sufficient for us to decide to use prelink by default.

Is pre-linking worth it?

Posted Jul 15, 2009 22:19 UTC (Wed) by nix (subscriber, #2304) [Link] (7 responses)

That figure is surprising to me, as it would imply that 20% of your
average program was relocation sections, or that they are very scattered
in the executable, which should not be the case. (As relocations are
incurred disproportionately at startup, it might mean simply that you ran
a lot of programs that exited fast, but those would be in cache and thus
not read from the disk.)

(glibc is linked with --bind-now, but that doesn't affect things much: it
causes immediate relocation, but only of those symbols in glibc that are
used, not of all of them.)

Is pre-linking worth it?

Posted Jul 16, 2009 3:21 UTC (Thu) by tbird20d (subscriber, #1901) [Link] (6 responses)

Reducing "payload" by 20% doesn't mean that 20% of the whole program is relocation, merely 20% of what gets loaded. Due to demand paging, rarely does the entire program get loaded from disk. Also, I'm not sure, but it may be that doing ordinary linking requires pages to be loaded for relocation which otherwise would not have been loaded at all.

Is pre-linking worth it?

Posted Jul 16, 2009 3:35 UTC (Thu) by nix (subscriber, #2304) [Link] (5 responses)

Yes, but even so, 20% is a terribly high figure. The test program must
have called a very large number of symbols per text page for this to be
true. (Possibly it was a very small test binary, /bin/cat or something
like that: the essentially random ordering of symbols in glibc would have
required many of its pages to be faulted in for relocation processing,
leading to an artificially high figure.)

Is pre-linking worth it?

Posted Jul 16, 2009 4:35 UTC (Thu) by arjan (subscriber, #36785) [Link] (2 responses)

this wasn't a test program; it was a whole OS boot + login into the UI/desktop.

Is pre-linking worth it?

Posted Jul 16, 2009 4:39 UTC (Thu) by arjan (subscriber, #36785) [Link]

(and yes, I was surprised by how much it was as well, but it made it a no-brainer to turn on for us)

Is pre-linking worth it?

Posted Jul 23, 2009 16:57 UTC (Thu) by jgg (subscriber, #55211) [Link]

I've got similar results from some of my embedded work too. prelinking saves about 3-5% of system ram in my cases because it prevents paging in of the relocation and symbol tables, and it doesn't dirty as many pages in the shared libraries.

20% doesn't seem that surprising to me, glibc for instance has 50k you have to load just to do symbol resolution. You'd only need to fault in 250k of text from glibc to get to 20% overhead. Even my desktop has only faulted in 224kb of glibc.
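
Per-process numbers like those can be read straight out of /proc; a quick sketch, where the grep pattern depends on how the libc file is named on the system:

    # compare the size of libc's text mapping with how much this shell has
    # actually faulted in (Size vs. Rss)
    grep -A 3 'r-xp .*libc' /proc/$$/smaps | grep -e '^Size' -e '^Rss'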

Is pre-linking worth it?

Posted Jul 16, 2009 11:22 UTC (Thu) by mjthayer (guest, #39183) [Link] (1 responses)

Perhaps because the pre-linking made it unnecessary to open certain on-disk library files to do the symbol look ups, putting that off until symbols from them were actually needed? Just a guess.

Is pre-linking worth it?

Posted Jul 16, 2009 17:51 UTC (Thu) by nix (subscriber, #2304) [Link]

No, relocations within a given library would happen when that library was
opened: the relocation process cannot trigger additional library opens
that weren't already going to happen.

In any case, most library opens during system startup will be driven by
DT_NEEDED, in which case the whole dependency tree of them gets mmap()ed
immediately.

Is pre-linking worth it?

Posted Jul 17, 2009 14:32 UTC (Fri) by fcrozat (subscriber, #175) [Link]

That is strange: for all the benchmarks I ran while working on Mandriva boot speed, prelink was always (to my surprise) causing regressions in boot speed (until the desktop environment is up and running).

Do you have some detailed benchmarks or data on your findings ?

Prelink in Moblin

Posted Nov 12, 2009 20:06 UTC (Thu) by dottedmag (subscriber, #18590) [Link]

Out of curiosity: do you run prelink during first run or while building system images?

Is pre-linking worth it?

Posted Jul 15, 2009 21:01 UTC (Wed) by louai (guest, #58033) [Link] (4 responses)

I always thought the -Bdirect linking patch was interesting: http://sourceware.org/ml/binutils/2005-10/msg00436.html

Too bad Ulrich Drepper immediately shot it down with his usual finesse: "Forget it, we have prelinking, which is much more efficient."

Is pre-linking worth it?

Posted Jul 15, 2009 21:15 UTC (Wed) by mjthayer (guest, #39183) [Link] (1 responses)

So that basically throws away the venerable Unix global application namespace which has burnt so many innocents? And saves start-up time into the bargain? Sounds nice, if the couple of programmes that depend on the traditional behaviour can be found and fixed.

Is pre-linking worth it?

Posted Jul 17, 2009 18:38 UTC (Fri) by quotemstr (subscriber, #45331) [Link]

> Sounds nice, if the couple of programmes that depend on the traditional behaviour can be found and fixed.
Couldn't the runtime linker enumerate all matches for a given symbol and log any ambiguity?

Is pre-linking worth it?

Posted Jul 16, 2009 8:48 UTC (Thu) by dgm (subscriber, #49227) [Link]

Looks very good (shame Ulrich, shame). Any opportunity to revive it?
On a side note, has Ulrich been an XFree86 developer in another life or what?
Oh, that was harsh, sorry.

Is pre-linking worth it?

Posted Jul 23, 2009 20:34 UTC (Thu) by marco_craveiro (guest, #59774) [Link]

-Bdirect was actually discussed in lwn a little while ago, in a very interesting article:

http://lwn.net/Articles/192624/

i wonder how much of what michael was hacking actually made it upstream...

Is pre-linking worth it?

Posted Jul 15, 2009 21:34 UTC (Wed) by leonb (guest, #3054) [Link] (8 responses)

Although my "objprelink" program has long been obsolete,
its web page <http://objprelink.sourceforge.net/> explains
in detail why the c++ abi breaks the classic tricks used by
dynamical loaders to speed up the startup times. It also
discusses and benchmarks methods that can improve the situation,
including (section 6) a brief discussion of the "prelink" program,
including its ability to improve the memory usage.
Prelinking was still very experimental in 2002.

The conclusion was simple methods (now used by everyone)
were sufficient to make the load-time linking time small in
comparison with the KDE3 application startup time.
Large speedups should be found elsewhere.


Is pre-linking worth it?

Posted Jul 15, 2009 22:44 UTC (Wed) by mjthayer (guest, #39183) [Link] (7 responses)

> Large speedups should be found elsewhere.
I somehow feel that after the impressive work done on kernel boot times, this should be the next candidate. Booting to a desktop with firefox, oo.o and gimp running in 15 seconds - that would be something to impress people with... although right now I don't quite believe in it.

Is pre-linking worth it?

Posted Jul 15, 2009 22:48 UTC (Wed) by arjan (subscriber, #36785) [Link] (6 responses)

why?

we already have that working.... on a laptop.

OOo on a netbook is a bit painful still.

Is pre-linking worth it?

Posted Jul 15, 2009 22:56 UTC (Wed) by mjthayer (guest, #39183) [Link] (5 responses)

Wow. On my decently powered laptop, firefox feels like it needs a minute to start (maybe it does, I should time it). I hope others learn from what you have achieved!

Is pre-linking worth it?

Posted Jul 16, 2009 0:47 UTC (Thu) by etrusco (guest, #4227) [Link] (3 responses)

It probably does.
While you're at it, create a new profile and time its startup too; but don't ask me why the difference...

Is pre-linking worth it?

Posted Jul 16, 2009 4:17 UTC (Thu) by zlynx (guest, #2285) [Link] (1 responses)

You can get some speed back by doing a "vacuum" on your Firefox profile's sqlite tables.

Is pre-linking worth it?

Posted Jul 23, 2009 21:37 UTC (Thu) by ariveira (guest, #57833) [Link]

Yep;

for f in ~/.mozilla/firefox/*/*.sqlite; do sqlite3 "$f" 'VACUUM;'; done

from time to time

Is pre-linking worth it?

Posted Jul 16, 2009 5:00 UTC (Thu) by bvdm (guest, #42755) [Link]

Apparently NSS runs through your cache to seed its random number generator. May be something like that.. o_O

Is pre-linking worth it?

Posted Jul 16, 2009 10:54 UTC (Thu) by Tet (subscriber, #5433) [Link]

> On my decently powered laptop, firefox feels like it needs a minute to start

Yes. But that's largely because the Firefox developers are clueless and have built a bloated monster. I'm no Opera fanboy, but I have to admire the fact that it launches in 3 seconds from a cold cache. FWIW, Firefox managed to completely outdo itself on my decently powered home desktop last night, taking 14 minutes from invocation to first window appearing. Yes, I kept checking the process table to see that it really was there and trying to start. Yes, the machine was running slowly due to other activity on the box. But still, even under normal conditions, it takes a minute or so to launch, and this is a 3GHz machine with plenty of RAM. That's just insane...

tripwire and virtual machines

Posted Jul 15, 2009 23:33 UTC (Wed) by tridge (guest, #26906) [Link] (1 responses)

I've hit two other problems with pre-linking.

The first is when I use tools that try to detect breakins by checking for changes in the MD5 sum of binaries. You can get a lot of false alarms after pre-linking.

The second is virtualisation. I often use kvm with qcow2, using a shared base image between machines. When pre-linking runs, the binaries are rewritten inside each virtual machine, meaning they are no longer in common. This makes it slower, as well as taking up a lot more disk.

For most servers I think pre-linking isn't worth it. For desktop machines perhaps it's worthwhile, although I'm dubious there too.

Cheers, Tridge

tripwire and virtual machines

Posted Jul 16, 2009 4:49 UTC (Thu) by gmaxwell (guest, #30048) [Link]

It's possible to get the unprelinked binary by processing the prelinked one. RPM's verify does this, so you can still do intrusion detection.

Of course, that process could be compromised but so could your md5sum. ::shrugs::
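
In practice that looks roughly like the following, assuming prelink's --verify option behaves as its manual page describes (writing the un-prelinked image to standard output):

    # reconstruct the original, un-prelinked contents and checksum those instead
    prelink --verify /bin/ls | md5sum
    # rpm's own verification does something similar internally
    rpm -V coreutils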

Is pre-linking worth it?

Posted Jul 18, 2009 11:08 UTC (Sat) by magnus (subscriber, #34778) [Link] (1 responses)

Why are long start-up times due to symbol relocation mainly an issue with C++ libraries, when C libraries like the GTK stack also have object-oriented functionality with vtables etc. implemented in software?

In the C++ case, the compiler both generates the vtable structure itself and it knows details of the machine the code will run on, so one would think with a good ABI and compiler the C++ code should always be faster than the C implementation.

The only advantage of C implementations like GObject I can think of, is that the order in the vtable (class structure) is hard-coded in header files, and new functions are added to the end of the structure to maintain backward compatibility. C++ can not do the same trick, because the compiler doesn't know which virtual methods existed in earlier versions of the library. This lack of information makes the C++ run-time relocation problem more complex.

Is pre-linking worth it?

Posted Jul 23, 2009 7:35 UTC (Thu) by renox (guest, #23785) [Link]

>Why is long start-up times due to symbol relocation mainly an issue with C++ libraries when C libraries like the GTK stack also have object-oriented functionality with vtables etc implemented in software?

Probably because the C++ ABI isn't good..

Is pre-linking worth it?

Posted Jul 23, 2009 10:57 UTC (Thu) by roc (subscriber, #30627) [Link]

Startup performance is very important because it has a hugely disproportionate impact on how people perceive the software, even though it doesn't actually amount to a lot over the running time of the program.


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds