Courgette meets a dangerous (Red) Bend


By Jonathan Corbet
November 2, 2009
Back in July, your editor stumbled across Google's Courgette announcement and promptly added it to the LWN topic slush pile. He then promptly let it sit for three months or so. The news that this software is now the subject of a patent suit brought Courgette back to the foreground; here we'll look at what Courgette is for, how it works, and how it relates to the patent being asserted.

As most LWN readers will know, Google is working on its own web browser, called Chrome. The Chrome developers seem to be focusing on speed, but they are also clearly putting significant thought into the security of the browser. That is a good thing: web browsers are large, complex bodies of code that are directly exposed to whatever a web server might choose to throw at them. The complexity makes security-related bugs inevitable; the exposure makes them highly exploitable. Chrome's developers have come to the conclusion that, when security problems are found, they must be fixed as quickly as possible.

Prompt patching of bugs requires that they be identified and repaired as quickly as possible. But the repairs are not useful unless they get to the browser's users - all of them, or as close to that as possible. The Chrome developers worried that the sheer size of browser updates would make that goal harder to achieve. Massive updates take longer to download and install, are more likely to be interrupted in the middle, and greatly increase the strain on server bandwidth. Pushing out a fix for a severe zero-day problem might even tax the bandwidth resources of a company like Google, leaving users exposed for longer than they should be.

If the size of browser updates could be reduced significantly, it would become possible to update far more systems in less time. After looking at various ways to compress patches, the Chrome developers decided to create their own algorithm; the result was Courgette. This algorithm is based on the key observation that small changes at the source level tend to cascade into big changes in binary code; by taking a small step back toward the source, many of those changes can be abstracted back out.

In particular, Courgette tries to eliminate irrelevant changes to static pointers. Consider a simple example:

        if (some_condition)
                goto error_exit;

        /* ... */

    error_exit:
        return -EYOULOSE;

As the program is built, error_exit turns into a specific location in the code. An irrelevant change elsewhere in the file can cause the location of error_exit to change; that, in turn, will change the final compiled form of the goto line even though that line has not changed. That changed address looks like a difference in the binary file; when this happens thousands of times over, the binary patch will become severely bloated.
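The cascade is easy to demonstrate. Below is a purely illustrative sketch (a toy assembler and instruction format invented for this example, not Courgette code): inserting one unrelated instruction ahead of error_exit shifts the label's offset, so the assembled form of the untouched goto changes too.

```python
def assemble(source):
    """Toy assembler: resolve symbolic goto targets to instruction indices."""
    offsets = {arg: i for i, (op, arg) in enumerate(source) if op == "label"}
    return [("goto", offsets[arg]) if op == "goto" else (op, arg)
            for op, arg in source]

old_src = [
    ("test",  "some_condition"),
    ("goto",  "error_exit"),
    ("nop",   None),            # unrelated code
    ("label", "error_exit"),
    ("ret",   "-EYOULOSE"),
]
# One irrelevant instruction inserted elsewhere in the "file":
new_src = old_src[:2] + [("nop", None)] + old_src[2:]

print(assemble(old_src)[1])  # ('goto', 3)
print(assemble(new_src)[1])  # ('goto', 4): the goto's binary form changed
```

A byte-level diff of the two assembled programs would flag the goto as changed even though the corresponding source line is identical in both versions.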

Courgette works by finding static pointers in the code and turning them back into something that looks like a symbolic identifier. The new identifiers are generated in a way that ensures that they do not change if the underlying code has not changed. New versions of the binary (both before and after patching) are built using the replaced pointers; these reworked binaries can then be compared with a utility like bsdiff. Since addresses with unimportant changes have been replaced with consistent identifiers, the two binaries should be a lot closer to each other and the resulting diff should be much smaller.
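That pipeline can be sketched in a few lines of Python. This is a rough illustration under assumptions of my own (the same toy instruction format, labels assigned by order of first appearance, and difflib standing in for bsdiff), not Courgette's actual implementation:

```python
import difflib

def abstract_addresses(binary):
    """Replace each distinct address operand with a symbolic label assigned
    in order of first appearance, so unchanged control flow yields the same
    labels even when the raw addresses have shifted."""
    labels = {}
    return [("goto", labels.setdefault(arg, "L%d" % len(labels)))
            if op == "goto" else (op, arg)
            for op, arg in binary]

def diff_size(a, b):
    """Count elements touched by a minimal edit script (stand-in for bsdiff)."""
    ops = difflib.SequenceMatcher(a=a, b=b, autojunk=False).get_opcodes()
    return sum(max(i2 - i1, j2 - j1)
               for tag, i1, i2, j1, j2 in ops if tag != "equal")

old_bin = [("nop", None), ("goto", 3), ("nop", None), ("ret", None)]
# The same program with one unrelated instruction inserted; the goto shifted.
new_bin = [("nop", None), ("goto", 4), ("nop", None), ("nop", None),
           ("ret", None)]

print(diff_size(old_bin, new_bin))  # 2: the insertion plus the shifted goto
print(diff_size(abstract_addresses(old_bin),
                abstract_addresses(new_bin)))  # 1: only the real insertion
```

After abstraction, the only remaining difference is the genuinely inserted instruction; the shifted goto target no longer shows up in the diff at all.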

How much smaller? In an example cited on chromium.org, a full update weighed in at some 10MB. Using bsdiff (which already shrinks binary diffs considerably) yielded a 700KB change, already a significant improvement. With Courgette, though, the diff is 78,848 bytes. In other words, the size of the update drops to less than that of the unpleasant flash ad which probably decorates this article. That seems like an improvement worth having. It also seems like a technology that projects like deltarpm (which is bsdiff-based at its core) might want to take a look at.

Enter Red Bend Software and patent #6,546,552. For the curious, here is the first independent claim from that patent:

A method for generating a compact difference result between an old executable program and a new executable program; each program including reference entries that contain reference that refer to other entries in the program; the method comprising the steps of:
(a) scanning the old program and for substantially each reference entry perform steps that include:
(i) replacing the reference of said entry by a distinct label mark, whereby a modified old program is generated;
(b) scanning the new program and for substantially each reference entry perform steps that include:
(i) replacing the reference of said entry by a distinct label mark, whereby a modified new program is generated;
(c) generating said difference result utilizing directly or indirectly at least said modified old program and modified new program.

Even for patentese, this language tends toward the impenetrable. But once one realizes that "reference entries that contain reference that refer to other entries" means "addresses," it starts to become a little clearer. To your editor's overtly non-lawyerly, not-legal-advice reading, this claim does appear to describe what Courgette is doing.

Google is not dealing with a typical patent troll here; Red Bend is a company which manages over-the-air firmware updates for mobile carriers. The patent was applied for in 1999, and granted in 2003. This company may well be in a position to tell a sob story where its bread-and-butter patent is being stepped on by Google - a company which is now getting into the business of supplying firmware for mobile phones. On its face, this could certainly be made to look like just the sort of situation the patent system was created to deal with.

Of course, there may be prior art which invalidates this patent. But Google may well find that it's cheaper and easier to just settle with Red Bend, especially if, as Richard Cauley argues, the amount of the settlement could be quite small. Defeating a patent in court is a lengthy, expensive, and risky enterprise; it would not be surprising if Google decided that it had better things to do. The real question, in that case, is what sort of terms Google would negotiate. If Google takes a page from the Red Hat playbook, it will seek to get this patent licensed for all free software implementations. That outcome would remove this patent from consideration in the free software community and keep Courgette free software. A back-room deal with undisclosed terms, instead, could leave this useful technique unavailable for the next ten years.



Courgette meets a dangerous (Red) Bend

Posted Nov 2, 2009 21:12 UTC (Mon) by AlexHudson (guest, #41828) [Link]

Really don't see Google licensing this in the way Red Hat have managed in the past. Would be a nice surprise.

Personally, I think this is a bad patent - there's no amazing technical solution here; if you ask the right question this solution kind of pops out automatically. Patents are supposed to protect ideas for ways of solving problems people recognise as being very tough / impossible.

Courgette meets a dangerous (Red) Bend

Posted Nov 2, 2009 22:41 UTC (Mon) by riddochc (guest, #43) [Link]

I think an argument can be made that any invention becomes obvious if you can ask the right question to prompt it and understand the basic prerequisite knowledge.

There are likely a large number of patents in the world that aren't obvious to me, but I attribute that to the fact that (for example) no material-science related patent is obvious to me, because I'm not a chemist. But show me a software patent, and it will seem obvious to me. Thus, the "rule" that patents shouldn't be obvious to the typical practitioner with the relevant experience.

Sometimes the interesting part of an invention isn't the process used (the content of the patent) but simply the problem it's trying to solve. The solution may be obvious, but realizing the fact that the problem existed in the first place is an insight in and of itself.

This particular patent seems to me pretty obvious once you start thinking about how to do compression. Compression relies on finding repeated patterns in data and abstracting away that information - that's obvious. The result of compiling code after making changes is an object file with systematic changes. Those changes can involve pointer address changes - and this is obvious to anyone who programs much in C. If the typical C programmer were asked to make a compression program for executable patches, it seems to me that looking into systematic changes like pointer addresses is not just obvious; it's the low-hanging fruit.

Nearly every software patent I've heard of is either an analogue of an obvious, everyday process in the physical world, or the description of a mathematical algorithm in terms of software. Simply adding "...with a computer" to an obvious, ordinary process doesn't make it any less obvious. The patent system would be made consistent with itself either by removing the exclusion of mathematical algorithms from patentability (a bad idea, in my opinion) or by adding an exclusion for software-related patents.

Courgette meets a dangerous (Red) Bend

Posted Nov 2, 2009 22:58 UTC (Mon) by proski (subscriber, #104) [Link]

It's actually a good question for a job interview. How would you implement a binary patch algorithm? What would you do to reduce the patch size? I imagine a good applicant would come up with a similar idea, perhaps without implementation details.

Of course, it will take some effort to implement and validate the idea, but it should be much easier than validating e.g. a drug or a rocket engine.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 8:58 UTC (Tue) by mjthayer (guest, #39183) [Link]

> It's actually a good question for a job interview. How would you implement a binary patch algorithm? What would you do to reduce the patch size? I imagine a good applicant would come up with a similar idea, perhaps without implementation details.

Or for the patent office? Like finding twenty people "skilled in the field" with no knowledge of the patent and asking them to sketch a solution to the problem with fifteen minutes at their disposal?

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 13:40 UTC (Tue) by gbutler69 (guest, #54063) [Link]

Ask 100 RANDOM software developers with a college degree and 5 years of experience for a solution to this problem. I highly doubt more than 3 or 4 would come up with this idea in a reasonable amount of time. No, it is not OBVIOUS. If you think it is, you are either lying to yourself, or you are among the exceptional 3 or 4% of the 100 who would come up with this idea.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 14:49 UTC (Tue) by mjthayer (guest, #39183) [Link]

> Ask 100 RANDOM software developers that have a college degree and 5 years of experience for a solution to this problem. I highly doubt more than 3 or 4 would come up with this idea in a reasonable amount of time.
Wouldn't that be an answer as well? I can't say that I'm a fan of patents in general (software or otherwise), but at least it would be a more honest test of whether it fits within the current rules.

Courgette meets a dangerous (Red) Bend

Posted Nov 4, 2009 11:47 UTC (Wed) by ebiederm (subscriber, #35028) [Link]

My reaction when I read about Courgette the first time was: interesting, someone got around to implementing that.

The usage in Courgette also seems to be an independent reinvention.

Perhaps I have missed some detail, but this seems like a battle over the obvious from where I stand.

Courgette meets a dangerous (Red) Bend

Posted Nov 5, 2009 6:51 UTC (Thu) by nevyn (subscriber, #33129) [Link]

Maybe true, but a better question is: if only 3-4 people (out of your "random" 100) came up with the idea, does that qualify it for being patentable? Is the bar of "obvious" really so low that you could realistically expect thousands of people at Google/MS/IBM/etc. to be able to independently invent it?

Yes.

Posted Nov 5, 2009 13:16 UTC (Thu) by gbutler69 (guest, #54063) [Link]

Obvious to me would be: at least 30%-50% of those polled who are skilled in the art would come up with the idea and method in a reasonable amount of time. Everything is obvious in hindsight. Think about even simple things like the derivation of the quadratic formula. Once you know it, it is obvious. For *most* people, though, it is not obvious until shown. Now, admittedly, this is not the kind of thing that is patentable (mathematical algorithms being specifically excluded from patent protection), but it does demonstrate the point I'm trying to make.

Frankly, I think the only problem with patents is that they are awarded for too long (at least in the case of software patents). The term should be much shorter for many industries because those industries move so fast.

Patent law, like all other laws, is nothing but an agreement among the members of a society. If you don't like the law, work to change it. In fact, if you don't like my definition of obvious, then feel free to lobby and work for a more codified definition that fits what you think would be fair. All you have to do is get enough people to agree with you (not an easy task for even the simplest things).

Honestly, we all must learn to take more control over the laws of our society instead of just bitching about them. If you can convince enough people you are right, you can win.

Sticking your fingers in your ears and yelling "NA NA NA NA" at the top of your lungs won't really accomplish much (don't take that as an accusation against you or anyone else personally - just a statement against the general attitude displayed by many people).

I know that I often find myself frustrated, angry, and even bordering on wanting to lash out violently against some of the nonsensical crap that goes on legally that is FAR, FAR more onerous than patent law. That being said, I have to remind myself that all I need to do is attempt to persuade more like-minded (and not so like-minded) people to see things my way, and the law can be changed. Easier said than done.

Besides, in 5 or 10 years we're going to run out of oil, and then the whole world-wide economy will collapse and we will be back to chucking spears at each other in no time flat. So all of this kind of stuff is just a distraction anyway.

Re: Yes.

Posted Nov 6, 2009 5:15 UTC (Fri) by nevyn (subscriber, #33129) [Link]

Obvious to me would be at least 30% - 50% of those polled skilled in the art would come up with the idea and method in a reasonable amount of time.

Well, personally, I would disagree ... if 3-4% of a random sample in an "art" would come up with the idea when asked to solve a problem, then I fail to see why one person/entity should be able to stop up to 9 million people from using it (assuming every .us person could be taught said "art", that'd be 9 million).

Patents aren't supposed to be a lottery; they are supposed to solve the problem of sharing when only a very small number of people would ever be able to solve a problem. Thus it's worth granting a monopoly as an incentive to share. When you are anywhere close to 1%, you are saying at least one person in any public company could find the solution ... that's just not rare, IMO.

Of course, at the moment patents seem more like: if less than 99% of a random sample would come up with it, then it's patentable. So it's all academic.

Honestly, we all must learn to take more control over the laws of our society instead of just bitching about them. If you can convince enough people you are right, you can win.

Do you have any examples of that working?

Re: Yes.

Posted Nov 6, 2009 13:59 UTC (Fri) by gbutler69 (guest, #54063) [Link]

The Civil Rights movement.

Courgette meets a dangerous (Red) Bend

Posted Nov 12, 2009 17:54 UTC (Thu) by tbrownaw (guest, #45457) [Link]

Ask 100 RANDOM software developers that have a college degree and 5 years of experience for a solution to this problem. I highly doubt more than 3 or 4 would come up with this idea in a reasonable amount of time. No, it is not OBVIOUS.

Would the solution to FizzBuzz be obvious under this test? Or how to expand (from=1/1/2000; to=1/3/2000) into [(when=1/1/2000), (when=1/2/2000), (when=1/3/2000)] with SQL? The second one is actually an interview question here; I'd guess the candidates we get (with degrees and mostly 5+ (some supposedly 15+) years of experience) have a (probably low) single-digit-percent hit rate. Does this mean that a solution should be patentable?

Mathematical algorithms

Posted Nov 2, 2009 23:37 UTC (Mon) by man_ls (guest, #15091) [Link]

I think an argument can be made that any invention becomes obvious if you can ask the right question to prompt it and understand the basic prerequisite knowledge.
When I hear this argument about obvious patents I always think about the FFT algorithm, which seems to me the most unintuitive algorithm of all time. But it is, after all, a mathematical algorithm, and these should not be patentable -- no more than physical laws are.

Today I learned on Wikipedia that the prior art for the FFT goes back to Gauss in 1805. Go figure.

Mathematical algorithms

Posted Nov 3, 2009 4:13 UTC (Tue) by JoeBuck (subscriber, #2330) [Link]

Gauss discovered the FFT in 1805, so I think any patent would have expired by now. :-)

Mathematical algorithms

Posted Nov 3, 2009 7:31 UTC (Tue) by man_ls (guest, #15091) [Link]

OTOH I think the prior art would still be valid today :D You might as well try to patent the sieve of Eratosthenes for finding prime numbers.

Mathematical algorithms

Posted Nov 3, 2009 18:30 UTC (Tue) by Trelane (subscriber, #56877) [Link]

No, no, no. Did he discover it _on_a_computer_?! If not, then it's clearly novel!

Mathematical algorithms

Posted Nov 4, 2009 10:00 UTC (Wed) by Kluge (subscriber, #2881) [Link]

Actually, according to that link, Gauss didn't publish the algorithm (like so much of what he did), so by my understanding of patent law, it doesn't count as prior art.

Mathematical algorithms

Posted Nov 9, 2009 15:48 UTC (Mon) by gmaxwell (guest, #30048) [Link]

I see your hope and raise you 6,859,816, claim 1 of which appears to read on odd-radix Cooley-Tukey.

Of course— if that is what this patent is actually doing, it is patently invalid without a shred of hope at being enforceable. But it does show that the presence of a clear description of your algorithm in the patent database doesn't mean much without a fair amount of costly analysis.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 1:03 UTC (Tue) by ikm (subscriber, #493) [Link]

Well, the thing with chemistry (and drugs in particular) is that they do require a lot of tests and clinical trials (and hence a lot of money and time) to prove they actually work, with the side effects all known and well-studied. As you can see, this isn't true for software. Software patents are total cheating. Like, *total*. They are ALL obvious.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 3:34 UTC (Tue) by drag (guest, #31333) [Link]

You can't copyright drugs, unlike software. So the laws that restrict the use of drugs only last a few years, and the trade-off of the patent is that the invention becomes public domain.

While the patent system is probably abused in the case of drugs, having drugs patentable actually makes sense. A drug is a physical item, not protected by other laws; it has to deal with real-world engineering and physics, and it actually takes a huge amount of effort and expense to create. The system probably needs reform, but in the long run everybody benefits.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 3:40 UTC (Tue) by ikm (subscriber, #493) [Link]

Well, that actually was my point, too. Patents clearly work for drugs. But not for software (bummer!:)

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 11:45 UTC (Tue) by niner (subscriber, #26151) [Link]

No, they don't even work for drugs. And they are not needed there either. The pharmaceutical industry spends a multiple of its R&D investment on advertising, while basic research nowadays is nearly completely state-funded and happens at universities.

It's just a fairy tale that pharma companies would stop doing the little R&D that they have to do anyway to get products out if they couldn't have patent protection. They would just hurt their own business more than they would save.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 11:57 UTC (Tue) by ikm (subscriber, #493) [Link]

But if basic research is really state-funded and done at universities, I don't see where patents come to play at all.

Courgette meets a dangerous (Red) Bend

Posted Nov 13, 2009 14:14 UTC (Fri) by cowsandmilk (guest, #55475) [Link]

ever heard of Bayh-Dole? http://en.wikipedia.org/wiki/Bayh-Dole

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 12:02 UTC (Tue) by csigler (subscriber, #1224) [Link]

> the little R&D that they have to do anyway

I'm sorry, but any credibility you may have had was trashed by these, your own words.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 13:43 UTC (Tue) by niner (subscriber, #26151) [Link]

Can you tell me how those words could do that? As a non-native speaker of English I may just have used the wrong words.

Or maybe I shortened my argument too much. I meant that those companies have to do some R&D to make a usable product out of the results of the basic research that happens at the universities. And what they often do is just rebalance the amount of active ingredients and sell the result as a new product.

The point is: yes, they collectively spend some hundred million USD on R&D every year. But you have to compare that to the billions they spend on advertising alone.

Even if there were no longer any patent protection for their research results, they would still have a timing advantage over competitors that might copy their products. Competitors would still have to analyze the drugs, find the ingredients, get them licensed so they would be able to sell them, and get production going. Surely that timing advantage would be worth the not-even-12% of these companies' budgets that they spend on R&D.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 16:30 UTC (Tue) by csigler (subscriber, #1224) [Link]

Your claim is grossly exaggerated. This publication:

http://www.sciencedaily.com/releases/2008/01/080105140107...

quotes a York University study. Their work shows pharmaceutical companies spend almost twice as much on advertising as R&D. Frankly, I would expect that ratio to be reversed, as high-tech R&D costs are, well, high. (I have first-hand experience, having worked in similar R&D and piloting operations in years past.)

Your extravagant claim is "$100 million" for R&D and "billions" for advertising, which is a ratio of 1:20 or less. The NYU study says the ratio is not even as low as 1:2. This is why your statements lack credibility. They don't match up with published facts.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 16:32 UTC (Tue) by csigler (subscriber, #1224) [Link]

Sorry, my error. For "NYU" above, substitute "York University."

Big Pharma

Posted Nov 3, 2009 16:31 UTC (Tue) by tialaramex (subscriber, #21167) [Link]

Do you have a source for those two figures you've quoted? Because the last I remember it was suggested that R&D spend is about the same as advertising, still a long way from "poor us, we spend every cent on R&D and now you want to take away our patents" but a lot more than pretty much any other industry.

The idea that all the real work is done "at the universities" may be superficially correct, but it involves a bit of sleight of hand. You've gone from counting how much is spent /by/ the companies to whether work is done /at/ the universities. In practice drug research _at_ universities is heavily funded by these same big pharmaceutical companies. If they decided to cut that spending, you'd see big job losses.

In fact this is a major topic of discussion. If the researchers were funded by government (= higher taxes to pay for it) they'd have no reason to do some of the dubious things they do today to ensure they keep their funding from big pharmaceutical companies. For example, deciding not to publish results from an experiment which shows no difference between an old drug and a new drug. Or changing the measured outcome of a controlled trial after the data is collected, in order to have a positive result rather than an equivocal one.

Big Pharma

Posted Nov 4, 2009 10:17 UTC (Wed) by Kluge (subscriber, #2881) [Link]

As far as I can tell, the vast majority of university research is funded by the government, especially basic research. The numbers might be different if you're talking about clinical trials on drugs.

I'm not sure about the specific abuses you're referring to; certainly universities have generally failed to establish sufficiently strict codes of conduct regarding grants and contracts with pharma. I believe that's changing, though.

Big Pharma never actually cures anyone

Posted Nov 5, 2009 9:46 UTC (Thu) by pflugstad (subscriber, #224) [Link]

As an aside, my other impression of Big Pharma is that they tend to spend virtually ALL their R&D on products that don't actually cure anything, but rather put the user on a continuous dosing program (think Lipitor, etc.). While drugs like Lipitor may actually help out quite a bit, they really seem to be treating the symptoms and not the underlying disease. So there is actually a negative incentive to do research into a drug that might cure the underlying disease (almost an innovator's dilemma). For diseases that actually could be cured, there's zero non-government/university research, because there IS NO MONEY IN IT.

At which point, it becomes obvious that Big Pharma has zero incentive to actually cure anyone of anything. So at this point, it's pretty clear to me that anything Big Pharma says about patents and how much they spend on IR&D is totally irrelevant and frankly, disingenuous.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 18:17 UTC (Tue) by blitzkrieg3 (guest, #57873) [Link]

When you have a number of companies patenting the very chemicals that make up our DNA as they discover them just because they _might_ turn out to be drugs, I believe you have a failure of the patent system.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 10:03 UTC (Tue) by AlexHudson (guest, #41828) [Link]

I'm not sure I'm in total agreement with what you said, but I do agree that hindsight is a particularly poor judge of these matters. What I meant by "asking the right question", though, was that many software patents tend to be pre-empted by the requirements set forth by the inventors. In many ways, it ends up not causing huge issues because there tend to be many ways of skinning a cat, but I'm not sure it's easily dealt with in the current system.

As an aside, I can think of one patent which I think of as being quite clever: the selective quantization method used in MPEG encoding (e.g. MP3) to compress data without impacting the resulting sound. I think it's a nice tie-up between mathematical theory and biological reality, and it's a useful contribution to our knowledge of the physical world (the fact you pretty much have to do it in software is neither here nor there for me).

Courgette meets a dangerous (Red) Bend

Posted Nov 5, 2009 18:48 UTC (Thu) by jrigg (guest, #30848) [Link]

> As an aside, I can think of one patent which I think of as being quite
> clever: the selective quantization method used in MPEG encoding (e.g. MP3)
> to compress data without impacting the resulting sound.

It might be clever but it most definitely affects the sound. Only lossless compression algorithms (e.g. FLAC) have no effect on sound quality. Sorry if this is OT, but as an audio engineer I find this misconception a little irritating.

Courgette meets a dangerous (Red) Bend

Posted Nov 9, 2009 16:12 UTC (Mon) by gmaxwell (guest, #30048) [Link]

In an absolute sense digitization also "affects the sound".

But— you protest— that well executed digitization with sufficient bit-depth and sampling captures the information with precision exceeding the noise floor and band-pass of human hearing, and that under carefully conducted double-blind testing even the best listeners can not discern a difference.

Quite right.

But the same is true of lossy compression: at high enough rates, with well enough done methods, it exceeds the limits of human perception and produces results which are not ABX-able. It's often not well done, it often is used at fairly low rates, and because it uses more sophisticated techniques it can fail in subtler ways...

This is relevant because your model of lossless=perfect, lossy=bad brings about unreasonable conclusions. Which would produce a more accurate experience: Stereo lossless audio or surround sound at the same (high) bitrate using lossy compression?

Pulling this back on topic: perceptually weighted quantization *is* obvious to a practitioner in the art, or at least it has a very long history of incremental development, stemming back to the early vocoder speech-crypto devices of the WWII era, the weighting filters used on analog telephone lines, and analog noise shaping (Dolby A).

As in many other areas, the underlying technology needed for MP3 existed for a long time before computers became so stupidly fast that what would have seemed like a joke (executing 152 288-point complex/complex FFTs per second for the MDCTs in MP3) became completely reasonable. The same is true for some 'recent' innovations in asymptotically optimal error-correcting codes.



Courgette meets a dangerous (Red) Bend

Posted Nov 12, 2009 19:06 UTC (Thu) by jrigg (guest, #30848) [Link]

> At high enough rates, with well enough done methods, it exceeds the limits of human perception and produces results which are not ABX-able.

I have yet to hear such a thing from mp3, but I agree it would be possible using a good enough method.

> This is relevant because your model of lossless=perfect, lossy=bad brings about unreasonable conclusions.

Actually my model for compression is: audible=bad, inaudible=good.

It can be argued that the principle of perceptually weighted compression is obvious to a practitioner in the art, but then I think the same applies to many patents. In practice the criterion often seems to be whether or not it is obvious to the patent examiner.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 16:16 UTC (Tue) by NAR (subscriber, #1313) [Link]

I think an argument can be made that any invention becomes obvious if you can ask the right question to prompt it and understand the basic prerequisite knowledge.

My experience with "software research" was that the emphasis was on asking the right questions - that was the hard part; the actual implementation is mostly straightforward. The problem with patents is that in some cases they cover the question itself, not just the implementation, so they rule out any other implementation for the same problem.

On the other hand, why does Google bother with this hacking? Wouldn't it be easier to split the executable into many DLLs, so that if there's a bugfix, only the changed DLL would need to be delivered? For example, the JavaScript engine could be separated, the image-display libraries (libpng, etc.) could be separated, and so on.

granting of patents for obvious solutions to non-obvious problems

Posted Nov 5, 2009 12:10 UTC (Thu) by pjm (guest, #2080) [Link]

> Sometimes the interesting part of an invention isn't the process used (the content of the patent) but simply the problem it's trying to solve. The solution may be obvious, but realizing the fact that the problem existed in the first place is an insight in and of itself.

Such a realization may be “an insight”, and the result may in some cases be a useful contribution to society, but I believe the pertinent question is rather to characterize whether granting such patents is likely to advance or hinder the art (or whether it's likely to improve or worsen society).

In the case of the patent discussed in the article, it appears that the idea was independently conceived of, put into use, and publicized, without the motivation or other help of patents. The only argument I can think of that patents might have helped advance the art in this case would be to argue that Red Bend Software employees were funded in part by the hope that they would be granted this patent; that they would not have been funded if the threshold for granting patents were such that patents were not granted for obvious solutions to non-obvious problems; and thus that the idea's application would have been delayed by a couple of years. I don't find this argument very convincing. Perhaps someone more knowledgeable can offer testimony or evidence of this, or perhaps someone else can provide a better argument (though likely this isn't a good forum to seek one).

Similarly, in the case of the previous patent where I heard this argument made (viz. Amazon's “one-click” patent), my impression as an ignorant outsider is that Amazon would have conceived of, implemented and popularized the idea whether or not it was patentable.

(Two obvious arguments that granting such patents hinders the industry are that doing so incurs significant legal costs, and often hinders rather than promotes application of the idea.)

granting of patents for obvious solutions to non-obvious problems

Posted Nov 5, 2009 19:02 UTC (Thu) by dark (guest, #8483) [Link]

The original argument for the patent system is that it promotes publication. Solutions are made public rather than kept as trade secrets. Obviously, obvious solutions to non-obvious problems don't need this mechanism at all, since it will not be possible to keep the solution secret once the problem is known.

It's borderline-obvious

Posted Nov 12, 2009 10:34 UTC (Thu) by edmundo (guest, #616) [Link]

I would expect at least some interview candidates to come up with this solution to the problem.

The general principle of "transform your data into a different form which standard compression algorithms can handle better" is very well known. For example, if you want to compress text that contains a lot of capitalisation, you might want to first transform it like this:

some lower case AND SOME UPPER CASE lower again
->
some lower case <cap>and some upper case</cap> lower again

Then apply some standard compression algorithm, which can now spot the repeated words "some" and "case". I don't know whether this transformation works in practice, but it's an obvious thing to try, and I would claim that the Courgette transformation is only slightly less obvious.
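A minimal sketch of that transform in Python (the `encode_caps`/`decode_caps` helpers and the `<cap>` tag format are invented for illustration; as the comment notes, whether the transform actually helps compression in practice is untested):

```python
import re

def encode_caps(text):
    # Wrap runs of two or more upper-case words in <cap>...</cap> and
    # lower-case them, so a later compressor can spot repeats like "some".
    return re.sub(r"[A-Z][A-Z ]*[A-Z]",
                  lambda m: "<cap>" + m.group(0).lower() + "</cap>",
                  text)

def decode_caps(text):
    # Inverse transform: restore the original capitalisation.
    return re.sub(r"<cap>(.*?)</cap>",
                  lambda m: m.group(1).upper(),
                  text)

s = "some lower case AND SOME UPPER CASE lower again"
t = encode_caps(s)
print(t)  # some lower case <cap>and some upper case</cap> lower again
assert decode_caps(t) == s
```

The key property is that the transform is exactly invertible, so it can sit in front of any off-the-shelf compressor without losing information.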

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 11:02 UTC (Tue) by epa (subscriber, #39769) [Link]

Dividing software patents into 'good' and 'bad' is not really a game worth playing. The legal system or the patent office cannot make such value judgements. They can only deal with clear rules such as 'programs for computers are not patentable', and sometimes manage to mess up even that (see the EPO).

What is a good patent

Posted Nov 4, 2009 21:39 UTC (Wed) by rvfh (guest, #31018) [Link]

The best patent is not one that solves an obscure problem in a complicated way; quite the opposite. The best patent is simple and solves a very common problem in the best way yet found, so that everybody who has the problem (and that's a lot of people, since it is a very common problem) really wants to license the patent rather than use a free, inadequate solution.

Heh, the penny just dropped...

Posted Nov 2, 2009 22:45 UTC (Mon) by JamesErik (subscriber, #17417) [Link]

"Courgette" is "squash" in French. It's the "squash" you eat but plays on the other meaning of "squeeze" or "compress". Kind of amusing...

Heh, the penny just dropped...

Posted Nov 3, 2009 11:08 UTC (Tue) by ms (subscriber, #41272) [Link]

Yeah, it's neat. Us English use Courgette for the same vegetable that Americans call Zucchini. I've no idea whether the French actually use it.

Heh, the penny just dropped...

Posted Nov 3, 2009 11:56 UTC (Tue) by patrick_g (subscriber, #44470) [Link]

Heh, the penny just dropped...

Posted Nov 14, 2009 22:14 UTC (Sat) by jch (guest, #51929) [Link]

Heh. Amusing indeed.

> "Courgette" is "squash" in French.

Just to be pedantic -- « courgette » is courgette (zucchini on the other side of the pond). Squash is « courge ».

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 13:59 UTC (Tue) by skvidal (guest, #3094) [Link]

When I read what this patent is about it really seems to me the kind of thing that makes sense to patent. It is a good bit obscure, it isn't (to me) obvious and it seems to definitely be innovation.

I think my objection to patenting software is not the patent process, nor what it is intended to protect; these days it is the term of the patent. The playing field evolves too quickly for a 20-year span.

For this patent, for example, if the term had been seven years from the date it was granted, would we be as upset about it?

I'm not sure I would.

Courgette meets a dangerous (Red) Bend

Posted Nov 5, 2009 8:00 UTC (Thu) by dwmw2 (subscriber, #2063) [Link]

"When I read what this patent is about it really seems to me the kind of thing that makes sense to patent. It is a good bit obscure, it isn't (to me) obvious and it seems to definitely be innovation."
I strongly disagree.

You start with a bunch of .o files. All but one of them is identical to the old version. The delta is small.

You link your .o files into an executable. The linker does its thing with all the relocations. The delta between old and new executable is now huge.

The thought process goes... "Oh, what changed? All those relocations? Well, they're not real information; they can be reproduced. Let's knock something up that lets us omit those from the delta we send over the wire..."

It seems entirely obvious to me; the build process is giving you a bloody great hint. It doesn't take a huge leap of intuition; you just need to be thinking about the problem coherently.
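The cascade is easy to see with a toy model (the five-byte instruction format and the `link()` helper below are invented for this illustration, not anything from Chrome or the patent):

```python
# Toy "linker": instructions reference labels; linking resolves each
# label to an absolute offset in the output image.
def link(program):
    # Each instruction is 5 bytes: a 1-byte opcode plus a 4-byte
    # little-endian target address (an invented format for this demo).
    addresses = {}
    for i, (label, _op, _target) in enumerate(program):
        addresses[label] = 5 * i
    image = bytearray()
    for _label, op, target in program:
        image.append(op)
        image += addresses[target].to_bytes(4, "little")
    return bytes(image)

# 100 instructions, each "calling" the next one.
old_src = [(i, 0xE8, (i + 1) % 100) for i in range(100)]
# The source-level change: one instruction prepended.
new_src = [("pre", 0xE8, 50)] + old_src

old_img, new_img = link(old_src), link(new_src)
# Align past the inserted instruction and count differing bytes: every
# resolved call target has shifted by 5, so the binary delta is huge
# even though the source delta is a single instruction.
diff_bytes = sum(a != b for a, b in zip(old_img, new_img[5:]))
print(diff_bytes)  # at least one differing byte per instruction
```

The resolved addresses are derived data, which is exactly the point: they can be regenerated on the receiving end instead of being shipped in the delta.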

Personally, I would love to see a private criminal prosecution for fraud, against the "inventors" named on this patent and others like it. The patent system is broken, and the patent office is complicit in its brokenness. The best way forward is to use something outside that system, in my opinion.

Courgette meets a dangerous (Red) Bend

Posted Nov 5, 2009 19:27 UTC (Thu) by magnus (subscriber, #34778) [Link]

No, it doesn't start with a bunch of .o files.

It starts with a practical problem (updates are too large), then thinking about the type of code changes that typically make up an update, then thinking about the relocations etc you mention, then coming up with this idea, then recognizing it's worth doing, then realizing the idea.

If it's so obvious, why aren't there any open source implementations? Why aren't automatic updates handled this way in Linux distros or commercial OSes?

Courgette meets a dangerous (Red) Bend

Posted Nov 5, 2009 19:50 UTC (Thu) by dwmw2 (subscriber, #2063) [Link]

Remember, the patent doesn't cover the implementation. It's just the idea. However much work goes into the implementation, what's being protected by the patent is just the basic concept.

And you do start with .o files, every day. The relocation thing is staring you in the face every time you do a build. When you come up with the 'modified program' mentioned in this update process, you're just stepping backwards a step in the build process. You're not actually doing anything particularly new and exciting.

There's a very similar trick with compression. Packages (RPM/deb/etc.) are generally compressed, and when doing deltas on them it's useful to work on the uncompressed version rather than the compressed version. Otherwise you get a lot of unnecessary differences between the old and the new version. Small changes cascade into big changes in the compressed package.
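A toy sketch with zlib (not the actual deltarpm machinery) shows the effect: a one-byte change in the payload perturbs the compressed stream from very early on.

```python
import zlib

# Two payloads that differ by a single byte in the middle.
old = b"A" * 1000 + b"hello world" + b"B" * 1000
new = b"A" * 1000 + b"hellp world" + b"B" * 1000

def common_prefix(a, b):
    # Length of the longest identical leading run of two byte strings.
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

# Uncompressed, the streams agree for over a thousand bytes.
print(common_prefix(old, new))  # 1004
# Compressed, the streams diverge far earlier, so a byte-level delta
# between the compressed forms captures much less of the similarity.
print(common_prefix(zlib.compress(old), zlib.compress(new)))
```

This is why delta tools get better results by decompressing the packages, diffing the raw contents, and recompressing on the receiving side.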

And, in answer to your final question, I have no idea why people aren't doing either of the above. The deltarpm stuff which just went into Fedora is still operating on the compressed packages without bothering to optimise it, as far as I know.

It seems like people just don't care that much about how efficient the update process is, at least for Linux distributions. Why else would we still be using yum?

Courgette meets a dangerous (Red) Bend

Posted Nov 5, 2009 20:43 UTC (Thu) by skvidal (guest, #3094) [Link]

As compared to what? I keep waiting for the solution to shipping a lot of bits, often, to come trotting up, but nothing has leapt into view. yum-presto has helped matters, but it does eat up a good bit of CPU time.

Courgette meets a dangerous (Red) Bend

Posted Nov 7, 2009 3:39 UTC (Sat) by magnus (subscriber, #34778) [Link]

> When you come up with the 'modified program' mentioned in this update process, you're just stepping backwards a step in the build process. You're not actually doing anything particularly new and exciting.
I agree that stepping back in the build process by itself is not very interesting. If they had just diffed the .o files and linked at the other end, I would have agreed completely with you.

The interesting part is the concept of feeding linker relocation info as hints into a more generic binary delta/compression algorithm. This allows for relatively simple decompression/patching at the other side, and also for proprietary software it doesn't require you to expose too much of the code's internals.

Courgette meets a dangerous (Red) Bend

Posted Nov 24, 2009 22:57 UTC (Tue) by hozelda (guest, #19341) [Link]

>> If it's so obvious, why aren't there any open source implementations? Why aren't automatic updates handled this way in Linux distros or commercial OSes?

Assuming you are correct... probably because there are more important things to worry about as a function of the resources available to tackle them.

The day this becomes a dominating issue, very good solutions will be developed independently. [ http://www.againstmonopoly.org/index.php?perm=59305600000... ]

The patent is likely unconstitutional (illegal) because it almost surely does not promote the progress of science and useful arts.

Search and replace?

Posted Nov 3, 2009 15:11 UTC (Tue) by southey (guest, #9466) [Link]

Noting that it says 'executable program' (whatever that means), creating a diff/patch after searching and replacing on some executable shell script must be covered by the patent:
$ sed "s/old/new/g" old > new; diff old new

Probably could cover some of those './configure and make' programs as well.

Courgette meets a dangerous (Red) Bend

Posted Nov 3, 2009 19:39 UTC (Tue) by intgr (subscriber, #39733) [Link]

You have one too many "they" in the article:
> Prompt patching of bugs requires they they be identified and repaired as quickly as possible

PS: what is the best way to report grammar errors/typos? Should I add them as comments? Or send an email? Or should I not bother at all?

Reporting typos

Posted Nov 3, 2009 19:43 UTC (Tue) by corbet (editor, #1) [Link]

Email to lwn@lwn.net is the preferred way; most LWN readers are unlikely to be interested in reading typo comments.

Reporting typos

Posted Nov 5, 2009 11:09 UTC (Thu) by alex (subscriber, #1355) [Link]

I have to wonder what the actual ratio of corrections via email vs. comments is for LWN?

Reporting typos

Posted Nov 5, 2009 13:30 UTC (Thu) by corbet (editor, #1) [Link]

We don't track it, but we get a lot more email corrections than posted ones.

Courgette meets a dangerous (Red) Bend

Posted Nov 6, 2009 15:25 UTC (Fri) by mrdoghead (guest, #61360) [Link]

This looks like another case of a patent being granted based on a very general outline of an idea that is itself no more than a direct application of logic. All software patents devolve to the latter, but not all are presented in such a burdensomely vague way. That makes this instance such an egregious example of misapplied patent law, and such an unconvincing complaint. It may be a matter Google prefers to settle, but each new case of this sort reinforces my view that software patents make no sense and do a substantial disservice to society as a whole.

Courgette meets a dangerous (Red) Bend

Posted Nov 6, 2009 16:22 UTC (Fri) by etienne_lorrain@yahoo.fr (guest, #38022) [Link]

It may be possible for the .rpm/.deb package to contain object files instead of linked applications, so that the linker is called as part of the installation (linking is usually quite quick).
By keeping the original object files for each installed package, a diff would be quite small.

Just an idea to rebuild the world differently...

Courgette meets a dangerous (Red) Bend

Posted Nov 8, 2009 10:51 UTC (Sun) by sfink (subscriber, #6405) [Link]

After reading that claim, I wonder if this cat could be usefully skinned a different way: first, augment your compression code to optionally store a map of words that should be stored in separate contiguous chunks and woven back together during decompression. Second, allow the map to be derived from the data itself, using a format that matches ELF. And probably third, use word-based arithmetic deltas for the words in the separated-out group.

The series of static values should be very compressible, since the same label will be used repeatedly (and if it is stored as an offset from the current instruction pointer, then the differencing will take care of it).

This does not achieve as good compression as the strict label-based approach, but it is more general and perhaps bypasses the patent.

Apologies if this is hard to follow. I am partly using this to test whether I can usefully do text entry on this iPod touch, and the results are mixed.

Courgette meets a dangerous (Red) Bend

Posted Nov 12, 2009 12:23 UTC (Thu) by forthy (guest, #1525) [Link]

To me, a maybe non-infringing technique is fairly obvious. The real problem is that your program is something like this:

labela:
...
call labelc
...
labelb:
...
insert new code here
...
delete code there
...
labelc:
...
jump labela
...
call labelb

When you insert or delete code, the jumps/calls over these insertions and deletions change deltas. So what do you need to do? Your binary diff already tells you where to insert and delete code, i.e. where you change the size of the whole thing. So keep track of that, and adjust all the jump/call offsets according to the size change map. No need to convert something into symbols and back - just search for instructions 0xE8, 0xE9, check to which region the following address points to, and adjust accordingly to insertion and deletion.

So the algorithm is in total:

  1. Scan the instructions and, in a copy of each executable, write zeros into the address fields of jumps and calls
  2. Make a binary diff between the executables with the jumps and calls nulled out
  3. Apply the patch and adjust the jumps and calls on the original binary, creating another temporary copy
  4. Binary-diff this temporary copy with the new executable

For x86/x64, marking the jumps and calls requires some limited disassembler (to determine instruction boundaries); for RISC architectures, it's even easier. The reason I think this might be non-infringing is that the jump and call targets are adjusted together, and not as individual symbols. This is not obvious from the patent claim, and it might have actual downsides (e.g. if your patch replaces calls to function a with calls to function b, the symbolic version can compress that effectively, while my algorithm won't).
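A minimal sketch of step 1 of the scheme above (naive by design: without a real disassembler to find instruction boundaries, a data byte that happens to be 0xE8 would fool it):

```python
def null_call_targets(code):
    # Zero the 4-byte relative operands of E8 (call rel32) and
    # E9 (jmp rel32) so they don't show up in the binary diff.
    out = bytearray(code)
    i = 0
    while i < len(out):
        if out[i] in (0xE8, 0xE9) and i + 5 <= len(out):
            out[i + 1:i + 5] = b"\x00\x00\x00\x00"
            i += 5  # skip over the whole instruction
        else:
            i += 1
    return bytes(out)

# Two toy code sequences: same instructions, shifted call targets
# (as would happen after inserting code earlier in the binary).
old = b"\x90\xE8\x10\x00\x00\x00\x90\xE9\x20\x00\x00\x00"
new = b"\x90\xE8\x15\x00\x00\x00\x90\xE9\x25\x00\x00\x00"
# With the targets nulled out, the diff between them is empty.
assert null_call_targets(old) == null_call_targets(new)
```

Steps 3 and 4 would then reconstruct the real targets on the receiving side from the insertion/deletion map, so only the nulled-out diff needs to go over the wire.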

Courgette meets a dangerous (Red) Bend

Posted Nov 8, 2009 15:52 UTC (Sun) by ayalone (guest, #21387) [Link]

Google will probably buy Red Bend, as that would not only settle this suit but also give Google some very nice IP in the software-update area, which a cellular-software wannabe like Google will appreciate greatly.

Courgette meets a dangerous (Red) Bend

Posted Nov 24, 2009 7:37 UTC (Tue) by NinjaSeg (guest, #33460) [Link]

This sounds an awful lot like the preprocessor used in Microsoft's CAB files, and probably any other decent executable packer, that converts relative jumps to absolute ones. Prior art? (UPX? Automation? Atomik?)


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds