
Developments in the GCC world


By Jake Edge
March 18, 2009

As GCC nears its 4.4 release, there are a number of criteria that need to be met before it can ship. Those requirements—regressions needing to be squashed—have been met, but things are still stalled. Issues raised with the changes to the runtime library exception have delayed both the release and the branch that would open the GCC tree to new development until they are resolved. In the meantime, however, GCC development is hardly standing still; there are numerous interesting ideas floating around for new features.

Changing the runtime library exception was meant to allow the creation of a plugin API for GCC, so that developers could add additional analysis or processing of a program as it is being transformed by the compiler. The Free Software Foundation has long been leery of allowing such a plugin mechanism because it feared that binary-only GCC plugins of various sorts might be the result. In January, though, the FSF announced that it would change the exception—which allows proprietary programs to link to the GCC runtime library—to exclude code that has been processed by a non-GPL "compilation process". It is a bit of license trickery that, in effect, only allows plugins that are GPL-licensed.

Shortly after the new exception was released, some seemingly substantive issues were raised on the GCC development mailing list. Ian Taylor neatly summarized the concerns, which break down into three separate issues:

  • Code that does not use the runtime library and its interfaces at all might not fall within the definition of an "Independent Module", which would disallow combining it with the GCC runtime libraries. Code falling outside the "Independent Module" definition would not be affected directly, but combining it with other, compliant code that did use the runtime library would be disallowed.

  • There are questions about whether Java byte code should be considered a "high-level, non-intermediate language". It is common to generate Java byte code using a non-GCC compiler, but then process it with gcj.

  • There is also a hypothetical question about LLVM byte code and whether it should be considered a "high-level, non-intermediate language" as well.

Definitions of terms make up the bulk of the runtime library exception, so it is clearly important to get them right. The first issue in Taylor's summary seems like a simple oversight—easily remedied—but the last two are a little more subtle.

By and large, the byte code produced as part of a compiler's operation is just an intermediate form that likely shouldn't be considered a "high-level, non-intermediate language", but Java and LLVM are a bit different. In both cases, the byte code is a documented language, somewhat higher-level than assembly code, which, at least in the case of LLVM, is sometimes hand-written. For Java, non-GPL compilers are often used, but, based on the current exception language, the byte code from those compilers couldn't be combined with the GCC runtime libraries and distributed as a closed-source program. Since LLVM's license is GPL-compatible, there are currently no issues with combining its output with the GCC runtime, but Taylor uses it as another example of byte code being generated by non-GCC tools.

In addition to laying out the issues, Taylor recommends two possible ways forward. One is to clarify the difference between a compiler intermediate form and a "high-level, non-intermediate language". The other is to expand the definition of an eligible compilation process to allow any input to GCC that is created by a program not derived from GCC. The former distinction seems difficult to pin down in any way that can't be abused down the road, so the latter might be easier to implement. After all, the GCC developers can determine what kinds of input the compiler is willing to accept.

This may seem like license minutiae to some—and it is—but it is important to get it right. The FSF has chosen to go this route to prevent the—currently theoretical—problem of proprietary GCC plugins, so it needs to ensure that any holes are closed. As Dave Korn pointed out in another thread, releasing anything under an unclear license could create problems down the road:

If there's a problem with the current licence that would open a backdoor to proprietary plugins, and we ever release the code under that licence, evaders will be able to maintain a fork under the original licence no matter how we subsequently relicense it.

Meanwhile, GCC developers have been working on reducing the regressions so that 4.4 can be released. Richard Guenther reported on March 13 that there were no priority 1 (P1) regressions, and fewer than 100 regressions overall, which would normally mean that a new branch for 4.4 would be created, with the trunk opening up for 4.5 development. But, because of the runtime library exception questions, Richard Stallman asked the GCC Steering Committee (SC) to wait for those to be resolved before branching.

The delay has been met with some unhappiness amongst GCC hackers. Without a 4.4 release branch, interesting new features are still languishing in private developer branches. As Steven Bosscher put it:

But there are interactions between the branches, and the longer it takes to branch for GCC 4.4, the more difficult it will be to merge all the branches in for GCC 4.5. So GCC 4.5 is *also* being delayed, not just GCC 4.4.

What is also being held back, is more than a year of improvements since GCC 4.3.

Bosscher suggested releasing 4.4 with the old exception and fixing the problems for the 4.5 release. While that could work, it would seem that Stallman and the SC are willing to give FSF legal some time to clarify the exception. In the end, though, the point is somewhat moot as there is, as yet, no plugin API available.

As part of the discussion of the new runtime library exception, Sean Callanan sparked a discussion about a plugin API by mentioning some of the plugins his research group had been working on. That led to various thoughts about the API, including a wiki page for the plugin project and one for the API itself. Diego Novillo has also created a branch to contain the plugin work.

The basic plan is to look at the existing plugins—most of which have implemented their own API—to extract requirements for a generalized API. In addition to the plugins mentioned by Callanan, there are others, including Mozilla's Dehydra C++ analysis tool, the Middle End Lisp Translator (MELT), which is a Lisp dialect that allows the creation of analysis and transformation plugins, and the MILEPOST self-optimizing compiler. Once the license issues shake out, it would appear that a plugin API won't be far behind.
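For a sense of what such an API implies, here is one shape a plugin entry point could take. This is purely illustrative: no plugin API existed at the time of writing, and every name in it is hypothetical. But it captures the two pieces under discussion, a license marker that the loader could check before loading anything, and an initialization hook where the plugin registers for compilation events.

    /* hypothetical-plugin.c: an illustrative sketch only; GCC had no
       plugin API when this was written, and all names here are made up. */

    /* A marker symbol that a GPL-enforcing loader could insist on
       finding before agreeing to load the module at all. */
    int plugin_is_GPL_compatible;

    /* Called once at load time.  A real API would presumably hand the
       plugin a way to register callbacks for compilation events, e.g.
       something like:

           register_callback(name, EVENT_FINISH_UNIT, run_analysis, NULL);
    */
    int plugin_init(const char *plugin_name, int argc, const char **argv)
    {
        (void)plugin_name;  /* unused in this sketch */
        (void)argc;
        (void)argv;
        return 0;           /* nonzero would tell GCC to reject the plugin */
    }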

There are other new features being discussed for GCC as well. Taylor has put out a proposal to support "split stacks" in GCC. The basic idea is to allow thread stacks to grow and shrink as needed, rather than being statically allocated at a fixed size. Currently, applications with enormous numbers of threads must give each one the worst-case stack size, even if it goes unused during the life of the thread. Split stacks could thus reduce memory usage, allowing more threads to run, and would also relieve programmers of the need to carefully choose a stack size for applications with thousands or millions of threads.
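In rough terms (a simplified sketch of the idea, not Taylor's proposed implementation), the compiler would emit a small check in each function prologue: if the current stack chunk lacks room for the new frame, a runtime routine chains on another chunk and the function runs there instead.

    /* Sketch of the split-stack idea; all names are illustrative. */
    #include <stdlib.h>

    struct stack_chunk {
        struct stack_chunk *prev;  /* the chunk this one grew out of */
        char *limit;               /* lowest usable address in this chunk */
    };

    /* Per-thread pointer to the active chunk, maintained by the runtime. */
    static __thread struct stack_chunk *current_chunk;

    /* What a compiler-inserted prologue would conceptually do for a
       downward-growing stack: return the stack pointer to execute on. */
    static char *enter_function(size_t frame_size, char *sp)
    {
        if ((size_t)(sp - current_chunk->limit) < frame_size) {
            /* Not enough room: allocate a fresh chunk and run there. */
            size_t chunk_size = 64 * 1024;
            struct stack_chunk *c = malloc(sizeof *c + chunk_size);
            c->prev = current_chunk;
            c->limit = (char *)(c + 1);
            current_chunk = c;
            return c->limit + chunk_size;  /* top of the new chunk */
        }
        return sp;  /* the frame fits; proceed normally */
    }

Much of the difficulty in a real implementation lies in keeping that prologue check cheap and in interoperating with code that was compiled without it.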

Another feature is link-time optimization (LTO), which is much further along than split stacks. Novillo put out a call for testers of the LTO branch in late January. There are a number of optimizations that can be performed when the linker has access to information about all of the compilation units. Currently, the linker only sees the object files being collected into an executable, but LTO puts the GCC-internal representation (GIMPLE) into a special section of each object file. Then, at link time (though not actually performed by the linker itself), various optimizations based on the state of the whole program can be done. The kinds of optimizations that can be done are outlined in a paper [PDF] on "Whole Program Optimizations" (WHOPR) written by a number of GCC hackers, including Taylor and Novillo.
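The user-visible workflow being tested looks roughly like this (a sketch; -flto is the flag used on the branch, though the final interface could differ). With it, each object file carries GIMPLE, and the link step re-runs the optimizers across all of it:

    /* a.c */
    extern int get_answer(void);

    int main(void)
    {
        /* With whole-program information, this call can be inlined
           across the file boundary and main() folded to "return 42". */
        return get_answer();
    }

    /* b.c */
    int get_answer(void)
    {
        return 42;
    }

    /* Build steps:

           gcc -O2 -flto -c a.c      (object files carry GIMPLE)
           gcc -O2 -flto -c b.c
           gcc -O2 -flto a.o b.o     (whole-program optimization here)
    */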

While it is undoubtedly disappointing to delay GCC 4.4, hopefully the license issues will be worked out soon and the integration of GCC 4.5 can commence. In the interim, work on various features—many more than are described here—is proceeding. The FSF has always had a cautious approach to releases—witness the pace of Emacs—but sooner or later, we will see GCC 4.4, presumably with a licensing change. With luck, six months or so after that will come GCC 4.5 with some of these interesting new features.




Split stacks

Posted Mar 18, 2009 20:10 UTC (Wed) by avik (guest, #704) [Link]

I don't understand the motivation for split stacks. Okay, on a 32-bit system with hundreds of thousands of threads you will run out of virtual address space unless the program is carefully crafted. But a program with hundreds of thousands of threads must be carefully crafted anyway; and it would be better off running on a 64-bit (or 47-bit) system anyway.

Split stacks

Posted Mar 18, 2009 21:26 UTC (Wed) by elanthis (guest, #6227) [Link]

Just because you're on a 64-bit system doesn't mean you have hundreds of gigs of memory. :)

Also, embedded devices are often places where threads are used very often, partially just to avoid the overhead of full processes, and also because embedded systems are often the ones that want realtime behavior all over the place (and hence a single thread is not ideal).

Split stacks

Posted Mar 18, 2009 23:17 UTC (Wed) by avik (guest, #704) [Link]

Just because you've mapped hundreds of gigs of memory doesn't mean you'll allocate all of it.

Memory (on most systems) is demand allocated; when a page is touched it is allocated and linked into the page tables. Which is just what split stacks tries to accomplish, but without the overhead (or rather, we're already paying the overhead, so at no incremental cost).
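That distinction is easy to demonstrate with a minimal, Linux-specific sketch: a large anonymous mapping consumes address space up front, but physical pages are only allocated as they are touched, which is exactly the treatment thread stacks get today.

    /* Demand allocation in action (Linux-specific illustration). */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 256UL * 1024 * 1024;   /* a 256MB "stack" */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        p[len - 1] = 1;   /* touching one byte faults in a single page */
        /* Resident memory (VmRSS in /proc/self/status) grows by one
           page, not by 256MB; only address space was spent up front. */
        return 0;
    }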

Split stacks

Posted Mar 19, 2009 0:04 UTC (Thu) by elanthis (guest, #6227) [Link]

One assumes that all the threads are runnable immediately after creation and hence are likely to touch their own stack and cause it to be allocated. Split stacks, if I understand correctly, make the amount allocated for threads with small stacks much smaller than the full 4K stack size, so the 4K page that gets touched and mapped isn't used up entirely by just one thread.

Split stacks

Posted Mar 21, 2009 16:41 UTC (Sat) by nix (subscriber, #2304) [Link]

The default thread stack size is 4*Mb*, not 4Kb! This is not the kernel
with its harsh nonswappable resource limits.

Split stacks

Posted Mar 21, 2009 17:19 UTC (Sat) by avik (guest, #704) [Link]

1. The default can be changed, even without split stacks.
2. Even with 4MB stacks, stack allocation occurs on demand, so if a thread only touches a page, it will only allocate a page.

Split stacks

Posted Mar 21, 2009 23:35 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

However, it must be mapped immediately. So it's fairly easy to run out of _address_ _space_.

Split stacks

Posted Mar 19, 2009 0:47 UTC (Thu) by xoddam (subscriber, #2322) [Link]

The split stack mechanism can cope with stacks allocated in chunks smaller than a page at a time, on processors without an MMU. At the extreme, each function's stack frame could be allocated Simula-fashion on the heap (though the proposal uses a slightly more complicated mechanism that allocates contiguous chunks for use by as many frames as can fit).

Split stacks

Posted Mar 19, 2009 13:38 UTC (Thu) by avik (guest, #704) [Link]

Okay for nommu or <4K stacks this makes more sense, but does one actually run millions of threads on nommu hardware?

Also, if a stack is much smaller than 4K, the state is better off represented in a heap object, and the entire application multiplexed on one thread (with the complex stack-consuming bits on a thread pool).

I just don't see a lot of use for this.
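For reference, the alternative avik describes, per-task state kept in a heap object with one thread multiplexing many tasks, looks something like this sketch (illustrative names throughout):

    /* Each task's stack is replaced by a small, explicit state object
       that a single event-loop thread advances one step at a time. */
    #include <stdlib.h>

    enum conn_state { READING, WRITING, DONE };

    struct conn {                /* tens of bytes instead of a stack */
        enum conn_state state;
        int fd;
    };

    /* One step of work, called by the event-loop thread whenever this
       connection's fd is ready. */
    static void conn_step(struct conn *c)
    {
        switch (c->state) {
        case READING:
            /* ... nonblocking read; when complete: */
            c->state = WRITING;
            break;
        case WRITING:
            /* ... nonblocking write; when complete: */
            c->state = DONE;
            break;
        case DONE:
            free(c);
            break;
        }
    }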

Split stacks

Posted Mar 19, 2009 14:22 UTC (Thu) by ikm (guest, #493) [Link]

I do. This would finally allow me not to think about which stack size to choose. Set it too high and you can end up with no available address space; set it too low and you raise the possibility of an overflow in each thread.

Split stacks

Posted Mar 19, 2009 19:35 UTC (Thu) by avik (guest, #704) [Link]

Out of interest, can you describe the application?

Split stacks

Posted Mar 19, 2009 22:17 UTC (Thu) by ikm (guest, #493) [Link]

It wasn't really application-centric -- pretty much any application which may spawn an unspecified number of threads would do.

Split stacks

Posted Mar 20, 2009 8:36 UTC (Fri) by avik (guest, #704) [Link]

That's not true. If each thread allocates significant amounts of memory (say, holds open a tcp connection), then the 4K allocated by a stack would be insignificant.

To benefit from this, an application would need to spawn millions of threads, each of which allocates very little other memory. It's a very specialized use case.

Split stacks

Posted Mar 20, 2009 9:33 UTC (Fri) by ikm (guest, #493) [Link]

A thread may allocate a significant amount of memory on the stack. In a case where it can vary between "very little" and "quite a lot", split stacks would be useful.

Split stacks

Posted Mar 20, 2009 12:25 UTC (Fri) by avik (guest, #704) [Link]

Right. I'm having a hard time finding a concrete example where this occurs.

Split stacks

Posted Mar 20, 2009 14:23 UTC (Fri) by ikm (guest, #493) [Link]

Why do you need to?

Split stacks

Posted Mar 20, 2009 15:53 UTC (Fri) by avik (guest, #704) [Link]

In order to understand the motivation behind the effort to implement this feature.

Split stacks

Posted Mar 20, 2009 16:12 UTC (Fri) by avik (guest, #704) [Link]

Actually, I can think of one user of this feature: the Linux kernel. Linux does not allocate stacks out of virtual memory, so sparse allocation would not work for it, and it already has trouble on i386: if you use 4k stacks you are at risk of overflow; if you use 8k stacks you risk allocation failures since memory can easily be fragmented.

I wonder if that is the motivation for the feature.

Split stacks

Posted Mar 20, 2009 22:15 UTC (Fri) by jlokier (guest, #52227) [Link]

It's useful with just one thread on noMMU hardware.

A big problem with noMMU is stack overflow if the stack is too small, or wasted memory if it's too large. You have to fix the stack size when the app starts, regardless of its runtime behaviour.

Stack overflows cause mysterious memory corruption. (The regex library in uclibc is a particular culprit for using lots of stack).

If you allocate a huge stack to avoid the possibility of stack overflow (say a 256k stack - rsync needs that just for its local variables), that wastes a lot of memory on noMMU because it's not demand paged. Also, it requires a large contiguous allocation, which fails if your memory is fragmented.

Split stacks would solve all these problems nicely.

Developments in the GCC world

Posted Mar 18, 2009 21:05 UTC (Wed) by rvfh (guest, #31018) [Link]

Why can't they just say 'GCC plugins must be released under the GPL v3 license or later (...) and their source code made publicly available'? Is that not legally valid?

Why is it so complicated and convoluted?

Developments in the GCC world

Posted Mar 18, 2009 21:32 UTC (Wed) by SLi (subscriber, #53131) [Link]

That works if the plugin is a derivative work, which it probably would be
if it's linked into GCC (but the OSS world tends to assume too much about
the derivative work status, putting clear rules on it like linking makes
it derived but separate processes are not -- it's for a court to
determine, and I bet it won't be that clear cut).

But if there's just a part that reads some kind of intermediate language
linked into GCC and writes it out, then that can be licensed under GPLv3
while keeping the modules processing that interface proprietary. This is
what they want to prevent.

It's a tricky thing, because they just can't say "if you do that, it must
be GPL licensed" any more than you can say "The next version of Windows
must be GPL licensed". If it's not a derivative work, it's independent,
and its creators, not you, dictate the licensing terms.

Putting the restriction in libgcc is a clever way around this limitation,
but only works for works that are derived from libgcc, which at least by
FSF's interpretation includes any program that links with libgcc.
Basically, by having their code be part of other people's programs (even
if by linking), they legally get to say how it can be used. That wouldn't
necessarily (the lawyers don't really know either so we mundanes probably
shouldn't even try to guess which way the scale tips in a court) be the
case with well-separated gcc plugins.

Developments in the GCC world

Posted Mar 22, 2009 8:11 UTC (Sun) by dlang (guest, #313) [Link]

if it's not a derived work, what justification do you have to restrict it as part of your copyright license?

the justification for the 'no linking' stuff is that the result is a derivative work.

if it is, you can lay down the requirement that the result must be under GPLv3; if it's not, you don't have any say in it.

Developments in the GCC world

Posted Mar 27, 2009 17:32 UTC (Fri) by mmarq (guest, #2332) [Link]

"" That works if the plugin is a derivative work, which it probably would be
if it's linked into GCC (but the OSS world tends to assume too much about
the derivative work status, putting clear rules on it like linking makes
it derived but separate processes are not -- it's for a court to
determine, and I bet it won't be that clear cut). ""

Clearly the worst kind of trap that OSS and GCC could get into. A Legal trap.

THERE ISN'T ANYTHING NEW about the problem. WHY is it now an issue with GCC? The Linux kernel has the same problem. Yet nothing has prevented third parties from populating it with proprietary wrappers and modules, not directly in the dev trees but perhaps in >90% of REAL WORLD implementations by the users' hands (ATI and Nvidia prop. drivers being the more obvious ones)... and no matter what the FSF or the LF have to say about it!...

SO it ain't going to be different with GCC.

Though improvements have been made, the "opus" is the same. A "partizan's" approach of no Interfaces (APIs). And it is wrong because it WILL NEVER SOLVE the proprietary meddling. Jz.. and i have been saying this for 10 years now and facts haven't proved me wrong yet !...

GCC and Linux would be much better off defining low level APIs, no matter how many and how often those change, as long as it is not for the pleasure of "politics"... and yes, they could and should be partizan about accepting proprietary stuff into their official development trees, but it's time to recognize that they don't stand a chance of FORCING EVERYTHING OSS...

I mean, i'm all for OSS, as proof is the close to 10 years in this forum, but no politics is preventing me from using any "closed source" stuff, as long as it is clearly better, specially including device drivers and GCC plugins or extensions that would allow me to have better usage and performance.

And a place where OSS has been losing for a long time, in spite of the extraordinary efforts of KH to get the HW vendors on board, is precisely the HW. There can be multiple implementation variations for the same piece of chipset... that is... there is always a small and very low level piece of code that fits better with a particular implementation than with others (ex: 2 or 3 different implementations of a particular GPU graphics card... or even the same for different implementations of NICs with the same chip).

Also the same applies for particular combinations of CPU + GPGPU which GCC plugins should address. It's clear that the distinctions are in smaller details, but for OSS to get the upper hand in the OS ecosystem those must be addressed.

A lower level API, or an interface for modules that deal with those particularities, be it in GCC or Linux, would be THE NATURAL AND LOGICAL WAY of things. Because a "GENERIC" driver module or a "GENERIC" GCC optimization plugin is clearly not the best solution.

And when it comes close to the HW, the difficulty OSS has is clearly visible, dealing with most things by a "GENERIC" approach... because of "NEGATIVE POLITICS"... that is, crippling restrictions.

NO license, NOTHING HAS TO CHANGE... define those lower level interfaces in the best way possible FOR OSS DEVELOPERS... YES FOR OSS DEVELOPERS.

Perhaps some guy had 3 different brands (ex: Asus, Gigabyte, OCZ...) of implementations of adapters of the latest ATI GPU chip, as an example, and decided by fortunate reverse engineering, and good guessing, and rumored tips... to squeeze until the last drop of juice out of those 3 cards... ending up with 3 slightly different device drivers though the GPU chip is the same...

WOULDN'T DKMS, OR SOMETHING SIMILAR THAT WOULD ALLOW IMPLEMENTING THOSE DIFFERENCES BY WAY OF MODULES, LOADING THEM "DYNAMICALLY" BY THAT "DKMS" INTERFACE, BE A GOOD THING? ISN'T IT A HELP FOR THAT OSS DEVELOPER???? (forget about prop. stuff developers now).

IF the Proprietary Devs are such a concern, *harden the Interface* as a security and good dev method approach... but don't worry about stopping the use of proprietary stuff, because end users will do it no matter what.

THE ONLY THING THAT CAN MAKE OSS BE PREFERRED ALWAYS, IS EXCELLENCE OF CODE, NOT PARTISAN BLOCKADES.

The same exact thing applies to GCC: 3 slightly different system configurations of the same CPU+GPGPU (cache sizes, extensions...) can have 3 slightly different optimization modules...

SO WORRY ABOUT HELPING OSS DEVELOPERS GET THE EXCELLENCE THEY DESERVE... NOT EXCLUDING PROPRIETARY DEVELOPERS FROM YOUR TOOLS... AFTER ALL THE BIG QUESTION *BROADLY SPEAKING* IS IF THE OSS METHOD IS BETTER OR NOT ???

IF IT IS CLEARLY BETTER AS MANY BELIEVE, THEN WHY WORRY ??... IF IT IS NOT, WHY CONTINUE ??

Developments in the GCC world

Posted Mar 28, 2009 14:35 UTC (Sat) by nix (subscriber, #2304) [Link]

Your screaming about 'OSS DEVELOPERS' (please take the caps lock key off
your keyboard) appears to ignore that GCC is not only a free software
project but also one of the first such, started by RMS himself, and
FSF-copyrighted.

So these are *free software* developers, with all that that implies.

(And as for your screaming about stable APIs, you just prove that you know
next to nothing about GCC. Some of its internal interfaces, e.g. parts of
the machine description language, are more than twenty *years* old at this
point. The worry isn't about stable interfaces: it's that *exposing these
to other processes* or dynamically loadable entities might make
proprietary hooking too *easy*. Of course it's possible now: nothing stops
Microsoft hacking GCC to export GIMPLE or RTL to disk and read it back in
even now. But it's annoying and hard enough that nobody nefarious has done
that. If we do it for them, we have to ensure that the nefarious types who
might be attracted to it don't get a chance to do nefarious things, while
still allowing the nifty stuff.)

Developments in the GCC world

Posted Mar 29, 2009 2:56 UTC (Sun) by mmarq (guest, #2332) [Link]

Walking a shadowy valley in the mist of the fog... it's not screaming; if i had letters the size of the monitor screen you still wouldn't read it.

Really no offense but... it makes me smile!...

"" The worry isn't about stable interfaces: it's that *exposing these
to other processes* or dynamically loadable entities might make
proprietary hooking too *easy* ""

Strip the politics completely out of the way, and get the best interfaces humanly possible... that is what i rant about... forget that there # are (proprietary) entities that might make proprietary hooking too *easy* #... they can only be friends if they are not irrelevant...

SO WHO CARES about those ??... i mean, let them try. Isn't OSS a better paradigm, full of developers as good as anywhere else, or even better ??

OSS should have been the preferred method for HW vendors' device driver development a long time ago, as an example... it's SO INCREDIBLY OBVIOUS... but obviously it won't happen if ppl keep on shutting the door in the HW developers' faces...

The same with GCC as the preferred compilation framework, to the point that not even Intel would have much advantage in having an ICC.

Let's face it, HW isn't going to be "open Sourced" for very long long years to come... and the only way that might ever slightly happen is with a very **# fruitful cooperation #** (i may not agree on many points but i thank KH for the extraordinary efforts in seeking HW manufacturers' cooperation, effort which allowed me to continue to use Linux)...

** You... FSF, or the same block of interests, are not fighting the proprietary developers, you are fighting the end users that are not developers **... and those ain't going to ask permission, nor will the GPL prevent them, because they are not distributing.

Those that might gain by hooking proprietary stuff to GCC for better results ARE THE LARGE MAJORITY OF END USERS... and they will move to something else if they are fought over. SO DON'T CUT OUT the possibility of binary hooking... impose some *technical* rules on the interfaces to PREVENT ABUSE, about how many things and what can be done with those eventual binary module insertions; that is, harden the interfaces but don't lock ppl out...

All developers on the proprietary side are eager for the collective grandeur and respective scale factor of a major OSS development tree... and you'll never know if THEY CAN BE PRECIOUS or not by attracting enough critical mass of end users with the little improvements that they could render.

AND MORE, much more... as per my example in the previous post, binary module hooking can be very useful even to OSS developers... especially if they are for very specific targets of very **small size** things in nature...

OBVIOUSLY what comes to mind is the *lowest level* of device drivers (something like DKMS could have worked wonders in the official tree... never mind that most everybody is using it except the official ones... and so many flames about integrating stuff that has little to no use... go figure!!), as also very specific targets for the GCC back end!??... right !??... i may be ignorant of GCC intestines, i'm not a developer, but i could imagine the laugh if i suggested a monolithic approach to the development of "codec" plugins... so the rant is not technical, it's a question of more and better logic for a *natural* implementation against partizan and corporative group interests with their self preservation instincts and dominance eagerness.

Only the quality of GCC has prevented it from becoming irrelevant... and with the LLVM framework being adopted by Apple, and other frameworks that will pop up eventually with *INNOVATIONS*, i can't see the FSF doing the best defense or innovating enough to stay relevant...

So perhaps my caps lock is only a reflection of that very clear trend:
clue: it certainly isn't all those *concerns* about locking things out that prevented GCC all these years from being forked into oblivion...

Developments in the GCC world

Posted Mar 29, 2009 3:15 UTC (Sun) by mmarq (guest, #2332) [Link]

"" nothing stops Microsoft hacking GCC to export GIMPLE or RTL to disk and read it back ineven now. But it's annoying and hard enough that nobody nefarious has done that. If we do it for them, we have to ensure that the nefarious types who might be attracted to it don't get a chance to do nefarious things, while still allowing the nifty stuff.) ""

Who told you that M$ hasn't done that already ???...

About the rest i believe that there is nothing you or FSF can do to prevent M$, and others, from internally forking GCC to have much better results in their BSD and other Open Source "service servers", because they cooperate heavily with HW vendors...

MUCH BETTER results than you or RMS could achieve with GCC in implementing open source servers, because the FSF doesn't have = actually it locks them out = the level of cooperation with the HW vendors for all those details that M$ has... ain't life a bitch !?

FSF and you try to keep them out, but they will fork you to stay in... and have much better results in the process, because they go where you refuse to go !!

The danger is not nefarious types!... the danger is a clear shot in the foot !!

Developments in the GCC world

Posted Mar 29, 2009 3:22 UTC (Sun) by mmarq (guest, #2332) [Link]

"" About the rest i believe that there is nothing you or FSF can do to prevent M$, and others, from internally forking GCC to have much better results in their BSD and other Open Source "service servers" ""

should read :

"" ... from internally forking GCC, ** adding tons of nifty proprietary stuff that they don't share with nobody else **, to have much better results in their implementations of BSD and other Open Source "service servers" ""

Developments in the GCC world

Posted Mar 29, 2009 13:52 UTC (Sun) by nix (subscriber, #2304) [Link]

Yes, they could potentially do that iff they added appropriate
serialization and deserialization layers (the proprietary stuff couldn't
be in GCC itself, but could be in a program that talked to GCC.)

But *this is exactly why the new runtime library license matters*.

Developments in the GCC world

Posted Mar 31, 2009 19:46 UTC (Tue) by mmarq (guest, #2332) [Link]

"" But *this is exactly why the new runtime library license matters*. ""

No, you are missing the point completely. MS is not distributing any GCC or any software compiled with GCC. They most probably are using it *internally* only to compile installations of their open source servers... for Hotmail/Live mail, for the search engine... perhaps for other things.

THE ONLY LICENSE THAT CAN PREVENT THEM IS A CLOSED SOURCE LICENSE

Isn't that ironic !??

MORE... they don't have to serialize anything outside of GCC, they can actually change the code... fork it, is a better expression, and share nothing with nobody because they use it only internally. Can't i or you pick up a GPL program and change it at will, but keep the changes for home only ??... isn't that the first attribute of the GPL ??

Whatever license or provisions you make outside of the FIRST and most fundamental attribute of the GPL, that is, having the source code available, is completely orthogonal to them. To them the new license change for the runtime doesn't matter a bit. The same for more than 99% of *end users*... matter of fact, small *end users* don't read licenses and the big shops have lawyers...

So it matters only for the developers and distributors. But those without end users are like a car without petrol: it doesn't go anywhere very fast...
And that is what will happen through third-party disinvestment in GCC, if they see themselves too blocked by draconian measures. It's like the old phrase that too much love can kill you... also too much GPL... GCC is closing in on itself!

I like and defended the GPL3, it was and is an important legal/political tool of pressure. It started really worrying M$ about the future, in the very likely eventuality of OSS being better code than theirs, and worse, the license had not only outrun the patent activism that M$ expected, but had embraced & extended the patents with a share and share alike spirit. The most important weapon of M$ against OSS started to fade that day... it tends to become irrelevant now... they haven't launched the doomsday bomb while they could, because IBM and Novell and others assured they would go with the blast also by furnishing the OSS patent war chest... and GPL 3 tends to co-opt and neutralize things further... so that is why they want to be friends now.

To conclude, the most important quality about OSS is quality of code not the license, though the license can be important also. But without quality not even nefarious guys care about the code.




Developments in the GCC world

Posted Mar 31, 2009 20:10 UTC (Tue) by nix (subscriber, #2304) [Link]

MS can do whatever they want with GCC if they don't distribute it, and
they are entirely within their rights to do so. The purpose of the new
license is not to stop MS doing that; it's to make it harder for someone
(probably not MS) to distribute a compiler which consists of a GCC
frontend doing nearly all the work, a GIMPLE serializer (probably in the
form of the existing lto code), a proprietary backend or middle-end, and
possibly a GCC backend again.

Developments in the GCC world

Posted Apr 2, 2009 15:50 UTC (Thu) by mmarq (guest, #2332) [Link]

Yes i suspected

It's against the likes of LLVM...

But what a craziness !!!!... the power of GCC is in its development team and the quality of code... Why don't they adopt something like, or why not the very, LLVM backend, which is, among other things, being adopted by Gallium3D for graphics card device driver development ????

If the GCC team would do it, in 10 years LLVM would be completely irrelevant. The other way around, LLVM has Apple's support and most probably Nvidia's and AMD/ATI's, and... sorry for the caps:

IT IS GCC THAT IS GOING TO BECOME TOTALLY IRRELEVANT IN 10 YEARS' TIME IF THEY CANNOT CO-OPT THE LEVEL OF INVOLVEMENT OF THE 3D ***FUSION*** INDUSTRY THAT LLVM SEEMS TO BE GETTING NOW.

So why fight LLVM ?... it's crazy... Does GCC suffer from deep NIH trauma ??
OR could they come out in short time with something better than LLVM backends, enabling it to get the 3D / FUSION industry's attention away from LLVM...

The front end is not the most important or difficult part to implement, or so it seems to me; nonetheless LLVM is about to finish the Clang front end among other things... but why the civil war ??... there is no hardened license that is going to stop LLVM... and if they prove themselves better than GCC in the most important aspects, not only by the support of heavyweight industry players, GCC can count *eventually* on the Linux kernel jumping sides among other big projects... and then it's the end.


Developments in the GCC world

Posted Apr 2, 2009 16:09 UTC (Thu) by foom (subscriber, #14868) [Link]

> Yes i suspected
> Its against the likes of LLVM...

No, it's really not. The LLVM license is GPL-compatible. So there's no barrier to using LLVM as a
plugin to GCC, as long as you treat the LLVM component as if it were also GPL'd.

Also: please use fewer capital letters and less extra punctuation next time.

Developments in the GCC world

Posted Apr 2, 2009 18:33 UTC (Thu) by nix (subscriber, #2304) [Link]

'Fight' LLVM? They're not 'fighting' LLVM. The two projects share several
developers.

The only person who thinks the license change is targeted against LLVM is
you.

Developments in the GCC world

Posted Mar 29, 2009 13:51 UTC (Sun) by nix (subscriber, #2304) [Link]

Perhaps MS *have* hacked GCC to serialize and deserialize GIMPLE, but if
they have, they haven't distributed the result, nor has anyone from MS
even once made comments on the GCC list that suggest that they're doing
so.

Also, why would they bother? They have their own compilers.

(And, er, the GCC team's members cooperate heavily with hardware vendors,
and always have: most of the new targets are paid for by hardware vendors,
and always have been. Many GCC contributors *work* for hardware vendors.
Look up how CodeSourcery makes its money sometime, and how Cygnus used to.

So, yes, hardware vendors could choose to stop working directly with GCC
hackers and start working with MS on a forked GCC --- but why on earth
would they want to do anything of the kind? Out of random undifferentiated
evil? It brings only disadvantages to them.)

Developments in the GCC world

Posted Mar 31, 2009 20:53 UTC (Tue) by mmarq (guest, #2332) [Link]

"" So, yes, hardware vendors could choose to stop working directly with GCC
hackers and start working with MS on a forked GCC --- but why on earth
would they want to do anything of the kind? Out of random undifferentiated
evil? It brings only disadvantages to them.)""

I don't speak for anybody but it seems obvious. Brand new HW that cost them (HW vendors) millions to develop; they might not want to give any hint to the competition about a small number of innovative features, and that way maintain any eventual competitive advantage. That is why, in general, the OSS world has a considerable gap in support of new HW compared with the windows world.

Yes i believe the HW vendors want to cooperate heavily with OSS... OSS is so natural to them (HW vendors). Device drivers were the first model of "freeware" ever, long before OSS...

(( and in this forum i heard someone explaining to me, as an end user, that if i wanted support for a piece of HW fast, i should hire a couple of developers... what a business model :), what was traditionally *freeware* i must pay for now, while tons of server features that i don't need or use (as an end user) are free now and sometimes i must drag them around !!??? ))

... and if only the HW vendors had a way to **temporarily** (because things become obsolete fast) hide the most relevant **SMALL** secrets of their investments, by dynamically loading binary modules into the "GENERAL" frameworks of importance (kernel, GCC, others?), they could not only close the gap, but also invert the situation to the point that when a major new thing comes out, it comes out for Linux/OSS first... and also again they could make the performance of stuff dependent on hardware be better on OSS than anywhere else.

I can hear ppl scream NO... it's OK... they, and about everybody else that might have doubts or are outright YES, are going to wait however long until those things become obsolete anyway... only then might they appear in an OSS dev tree... ha!.. and a much smaller number of developers hired by HW vendors will be involved in OSS at any time, no matter how much and how often *things* are released... that is why many releases of stuff never include full support, or full HW vendor involvement.

And the caricature of the situation is that DKMS already offers this to a good extent for the Linux kernel, the possibility of dynamic insertion of **SMALL** binary modules; it needs improvements, not redesign, and it needs official support... for GCC, something along the same lines could be wonderful for the best results...

That is what i call REAL cooperation...

Developments in the GCC world

Posted Mar 31, 2009 22:50 UTC (Tue) by nix (subscriber, #2304) [Link]

I don't speak for anybody but it seems obvious. Brand new HW that cost them (HW vendors) millions to develop; they might not want to give any hint to the competition about a small number of innovative features, and that way maintain any eventual competitive advantage. That is why, in general, the OSS world has a considerable gap in support of new HW compared with the windows world.
Since the late 1980s, long before free software made it big, Cygnus was making money doing GCC and binutils ports to new platforms before those platforms were released, under NDA, giving the code to the platform's hardware guys (who paid them) then putting it into GCC et al a few months later.

The only difference between that and what happens now is that the hardware manufacturers often do it themselves (or get someone like CodeSourcery to do it for them as used to happen with Cygnus).

AIUI, GCC and binutils could generate code for amd64 before amd64 existed in silicon. Free software support for 64-bit AMD/Intel CPUs long predated proprietary support from anyone but Intel's compilers. Certainly MS was beaten to the punch (by as much as a year?)

Giving away free compilers for your new hardware is an absolute no-brainer. It encourages developers to move to your platform and helps sell it. With the notable exception of nvidia (who are selling a graphics card more than a hardware platform as such), everyone does it. ARM, Intel, AMD, Motorola, everyone.

So your claim that they'd refuse to provide info to free compiler hackers is contradicted by decades of established fact. Sorry!

Developments in the GCC world

Posted Apr 2, 2009 16:28 UTC (Thu) by mmarq (guest, #2332) [Link]

"" So your claim that they'd refuse to provide info to free compiler hackers is contradicted by decades of established fact. Sorry! ""

Am I ????.... really ???.... I agree that the situation is substantially better than it was... but why does Intel, to promote their HW in those benchmark PR sessions, always use ICC... and AMD sometimes Portland-based GCC and many times MS-based compilers... why is GCC only referenced by SPEC benchmarks for Unix-like jobs anyway...

Linux vs windows driver contest... for the last year gives

http://www.google.pt/search?as_q=Linux+vs+windows+driver+...

Ok, for things that "Unix-like" OSes have excelled at for a long time (*throughput*) Linux is better, but IBM, HP and or SUN server HW is of little importance in the much bigger all_over_the_world picture... and especially for many chipsets and graphics cards the practice doesn't seem to agree with you... worse, matchups are done with closed source stuff for the Linux camp, compiled with GCC ok, but generally lagging in performance behind the windows counterpart...

But what the heck am i trying to prove!?... if you are right and i am wrong, as you say "nix"... why the heck are so many ppl in OSS involved in reverse engineering things???... isn't it HW related ???... i never heard about reverse engineering the MS web server or something!...


Developments in the GCC world

Posted Apr 2, 2009 18:37 UTC (Thu) by nix (subscriber, #2304) [Link]

The components that are reverse engineered are those whose manufacturers
have not seen the light and started to provide documentation under
reasonable licenses.

However, pretty much none of those components are the sort of
general-purpose thing one could sensibly target a compiler to (GPUs are
the only exception I know of, and even there AMD/ATI is doing the right
thing).

Sigh

Posted Mar 26, 2009 8:53 UTC (Thu) by renox (guest, #23785) [Link]

I find it very disturbing that a release is delayed for such a petty reason: plugin support in GCC is a new feature, and the feature isn't ready because of a licensing issue? Just disable the feature in the current release.
I consider this a lack of respect for the other developers who worked on GCC.

Sigh

Posted Mar 27, 2009 0:54 UTC (Fri) by nix (subscriber, #2304) [Link]

It's sillier than that. Plugins are *not in* the current release, were
never planned to be in the current release, and will never be in any
release on that branch.

So why hold off the branch? I don't know. Does anyone?


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds