
Reducing HTTP latency with SPDY


November 18, 2009

This article was contributed by Nathan Willis

Google unveiled an experimental open source project in early November aimed at reducing web site load times. SPDY, as it is called, is a modification to HTTP designed to target specific, real-world latency issues without altering GET, POST, or any other request semantics, and without requiring changes to page content or network infrastructure. It does this by implementing request prioritization, stream multiplexing, and header compression. Results from tests on a SPDY-enabled Chrome and a SPDY web server show a reduction in load times of up to 60%.

SPDY is part of Google's "Let's make the web faster" initiative that also includes projects targeting JavaScript speed, performance benchmarking, and analysis tools. Mike Belshe and Roberto Peon announced SPDY on November 11 on both the Chromium and Google Research blogs, noting that "HTTP is an elegantly simple protocol that emerged as a web standard in 1996 after a series of experiments. HTTP has served the web incredibly well. We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers."

Finding the latency in HTTP

The SPDY white paper details the group's analysis of web latency, beginning with the observation that although page requests and responses rely on both HTTP as the application-layer protocol and TCP as the transport-layer protocol, it would be infeasible to implement changes to TCP. Experimenting on HTTP, on the other hand, requires only a compliant browser and server and can be tested on real network conditions.

The group found four factors to be HTTP's biggest sources of latency. First, relying on a single request per HTTP connection makes inefficient use of the TCP channel and forces browsers to open multiple HTTP connections to send requests, adding overhead. Second, the size of uncompressed HTTP headers, which comprise a significant portion of HTTP traffic because of the large number of HTTP requests in a single page. Third, the sending of redundant headers — such as User-Agent and Host — that remain the same for a session. Finally, the exclusive reliance on the client to initiate all HTTP requests, when there are cases where the server knows that related content will be requested, but cannot push it to the client.

SPDY tackles these weaknesses by multiplexing an unlimited number of concurrent streams over a single TCP connection, by allowing the client to assign priorities to HTTP requests in order to avert channel congestion, and by compacting HTTP request and response headers with gzip compression and omitting the redundant transmission of headers. The SPDY draft specification also includes options for servers to initiate content delivery. The available methods are "server push," in which the server initiates transmission of a resource via an X-Associated-Content header, and "server hint," in which the server only suggests related resources to the client with X-Subresources.
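The payoff from per-session header compression can be sketched with Python's zlib (the header names and values below are hypothetical; real SPDY also seeds the compressor with a preset dictionary, which is omitted here). Because SPDY keeps one compression stream open for the whole session, the second and later header blocks are encoded as back-references to the first and shrink to almost nothing:

```python
import zlib

# Hypothetical request headers that repeat on every request in a session.
headers = (b"method: GET\r\n"
           b"host: example.com\r\n"
           b"user-agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
           b"accept: text/html,application/xhtml+xml\r\n")

# One zlib stream per session, as SPDY does: later header blocks can
# reference earlier ones, so repeated headers compress to a few bytes.
comp = zlib.compressobj()
first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

print(len(headers), len(first), len(second))
# The second block is far smaller than the first.
```

Compressing each request independently would forfeit most of this gain; the shared stream is what makes the redundant-header problem nearly disappear.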

In addition, SPDY is designed to run on top of SSL, because the team decided it was wiser to build security into its implementation now than to add it later. Also, because SPDY requires agents to support gzip compression for headers, it compresses the HTTP data with gzip too.

The important thing to note is that SPDY's changes affect only the manner in which data is sent over the wire between the client and the server; there are no changes to the existing HTTP protocol that a web page owner would notice. Thus, SPDY is not a replacement for HTTP so much as a set of possible enhancements to it.

Comments on the blog posts indicate that although most readers see the value in header compression and request prioritization, some are skeptical of the need to multiplex HTTP requests over a single TCP connection. Other alternatives have been tried in the past, notably HTTP pipelining and the Stream Control Transmission Protocol (SCTP).

The white paper addresses both. SCTP, it says, is a transport-layer protocol designed to replace TCP, and although it may offer some improvements, it would not fix the problems with HTTP itself, which SPDY attempts to do. Implementing SCTP would also require large changes to client and server networking stacks and web infrastructure. The latter is also true for similar transport-layer solutions like Structured Stream Transport (SST), intermediate-layer solutions like MUX, and HTTP-replacements like Blocks Extensible Exchange Protocol (BEEP).

The problem with pipelining, it says, is that even when multiple requests are pipelined into one HTTP connection, the entire connection remains first-in-first-out, so a lost packet or delay in processing one request results in the delay of every subsequent request in the pipeline. On top of that, HTTP pipelining is difficult for web proxies to implement, and remains disabled by default in most browsers. The fully multiplexed approach taken by SPDY, however, allows multiple HTTP requests and responses to be interleaved in any order, more efficiently filling the TCP channel. A lost packet would still be retransmitted, but other requests could continue to be filled without pausing to wait for it. A request that requires server-side processing would form a bottleneck in an HTTP pipeline, but SPDY can continue to answer requests for static data over the channel while the server works on the slower request.
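The head-of-line blocking argument can be illustrated with a toy scheduling model (the resource names and timing units below are invented for illustration, not taken from the white paper). In a FIFO pipeline, every response waits for all earlier ones; with multiplexing, whatever is ready goes out first:

```python
# Toy model of head-of-line blocking. Each response becomes ready at
# time ready_at (e.g. after server-side processing) and occupies the
# channel for send time units.
responses = [  # (name, ready_at, send) -- hypothetical numbers
    ("page.cgi", 5, 1),   # slow server-side processing
    ("style.css", 0, 1),
    ("logo.png", 0, 1),
]

def pipeline(rs):
    """FIFO pipeline: responses go out strictly in request order."""
    t, done = 0, {}
    for name, ready, send in rs:
        t = max(t, ready) + send
        done[name] = t
    return done

def multiplex(rs):
    """Interleaved streams: send whatever is ready; a stalled
    stream delays nobody else."""
    t, done = 0, {}
    for name, ready, send in sorted(rs, key=lambda r: r[1]):
        t = max(t, ready) + send
        done[name] = t
    return done

print(pipeline(responses))   # {'page.cgi': 6, 'style.css': 7, 'logo.png': 8}
print(multiplex(responses))  # {'style.css': 1, 'logo.png': 2, 'page.cgi': 6}
```

In the FIFO case the slow CGI response pushes the static files out to times 7 and 8; multiplexed, they arrive at times 1 and 2 while the server is still working.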

Implementation and test results

The development team wrote a SPDY web server and added client support in a branch of the Chrome browser, then ran tests serving up "top 100" web site content over simulated DSL and cable home Internet connections. The test included SSL and non-SSL runs, single-domain and multiple-domain runs, and server push and server hint runs. The resulting page load times were smaller in every case, ranging from 27.93% to 63.53% lower.

The team's stated goal is a 50% reduction in load time; the average of the published tests in all of their variations is 48.76%. Though it calls the initial results promising, the team also lists several problems — starting with the lack of well-understood models for real world packet loss behavior.

SPDY remains an experiment, however, and the team solicits input on a number of open questions, including dealing with the latency introduced by SSL handshakes, recovering from a lost TCP connection, and how best to implement the server-side logic to truly take advantage of server push and server hint. Interested people are encouraged to join the mailing list and download the code.

So far, only the modified Chrome client code is available, and that from the public Subversion repository, not binary downloads. Peon said that the server release is coming soon, and the project page says that the test suite and benchmarking code used in Google's test will be released under an open source license as well.

A 50% reduction in page load times is nothing to sneer at, particularly when all of the gains come from tweaking HTTP's connection and data transfer behavior. Header compression alone gives noticeable savings; the white paper states that it resulted in an "~88% reduction in the size of request headers and an ~85% reduction in the size of response headers." The future of the web may indeed include new protocols like SCTP and BEEP, but SPDY is already demonstrating that there is plenty of room for improvement without drastically altering the protocol stack.





Reducing HTTP latency with SPDY

Posted Nov 18, 2009 18:09 UTC (Wed) by BrucePerens (guest, #2510) [Link]

The future of the web may indeed include new protocols like SCTP and BEEP

I'd guess that BEEP is 10 years old now.

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 23:06 UTC (Wed) by dps (guest, #5725) [Link]

Last time I looked at BEEP it was incredibly verbose and looked painful to implement. Apparently the designers thought that XML was a good idea and that the verbosity was not an issue. That was in relation to alternatives to the venerable, slim, and simple syslog protocol.

I personally think that the added verbosity and complexity of BEEP would outweigh any gains that might be obtained by sending multiple results. In short I do not think resurrecting BEEP is a good idea.

If you do want to resurrect something then RDP might make sense. RDP has most of the features of TCP except for the in-order delivery feature and was intended for bulk data transfer. Currently RDP implementations are very hard to find.

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 0:39 UTC (Thu) by samroberts (subscriber, #46749) [Link]

> Last time I looked at BEEP it was incredibly verbose

Beep framing overhead looks like this. You call this incredibly verbose? Compared to HTTP?

MSG 0 1 . 52 120
[the msg contents]
END

XML shows up only briefly during channel setup, and is quite minimal:

I: MSG 0 1 . 52 115
I: Content-Type: application/beep+xml
I:
I: <start number='2'>
I: <profile uri='http://iana.org/beep/FOO' />
I: </start>
I: END

Transporting HTTP over BEEP would allow multiple, simultaneous, bi-directional HTTP requests in parallel, multiplexing use of the same TCP connection.

I don't think it will happen, but I don't think SPDY will either :-)
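The framing quoted above is compact enough to parse in a few lines; a sketch of the RFC 3080 frame-header layout (type, channel, message number, continuation flag, sequence number, payload size):

```python
def parse_beep_header(line: str) -> dict:
    """Parse a BEEP frame header per RFC 3080, e.g. 'MSG 0 1 . 52 120':
    type, channel, msgno, more ('*' = more frames follow, '.' = last),
    payload sequence number, and payload size in octets."""
    kind, channel, msgno, more, seqno, size = line.split()
    return {
        "type": kind,
        "channel": int(channel),
        "msgno": int(msgno),
        "more": more == "*",     # continuation flag
        "seqno": int(seqno),
        "size": int(size),
    }

print(parse_beep_header("MSG 0 1 . 52 120"))
```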

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 12:32 UTC (Thu) by jzbiciak (guest, #5246) [Link]

I can see one way SPDY happens: Google supports it on all their servers, and pushes support for it into Chrome/Chromium and, crucially, their mobile browser in Android. Throw on top of that a Google-operated caching proxy service that Android users can use (as transparently as possible, of course!) so that browsing the Internet from your mobile suddenly got 2x - 3x faster without having to upgrade to 4G or what-have-you, and I think you'll see a lot of pressure for others to support SPDY.

Meanwhile, Google laughs all the way to the bank monetizing the extensive marketing data they've collected from these proxies about what people are actually browsing on their Android phones.

Or, am I just extra cynical in the morning?

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 12:54 UTC (Thu) by michel (subscriber, #10186) [Link]

I would not describe that as cynical, but rather as an interesting strategy well in line with Google's business model

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 0:12 UTC (Fri) by deleteme (guest, #49633) [Link]

I assure you that those logs are available from many big phone companies.

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 18:53 UTC (Wed) by kjp (guest, #39639) [Link]

Their comment about packet loss makes no sense. In either HTTP pipelining or SPDY, a lost packet stalls the TCP stream... there is nothing SPDY can do to stop that. Now, reordering response frames (i.e., images vs. a long-running CGI page) is clearly a good idea compared to HTTP pipelining.

I like their 'everything is a lowercase key property bag' concept for requests. I don't like the multiple ways to shutdown a stream (set a flag, or send a 0 length frame). That part of the spec needs pruning.

HTTP pipelining does suck for proxies. (I develop one.) I ended up supporting it from the client, but I don't pipeline up to the server, for compatibility, management, and giving-myself-a-headache reasons. It's most annoying because certain magic error codes or conditions kill the whole pipeline connection and you have to start over. Also, for management, simply asking what the current connections and their requests are gets more complicated, since it's many-to-many... of course, with SPDY over SSL the proxies won't even know what requests are occurring underneath, so in that case it's simpler :)

I also don't know how they figure a server push function doesn't change HTTP/Ajax level apis... sounds like a brand new api to me.

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 19:01 UTC (Wed) by elanthis (guest, #6227) [Link]

Lost packets aren't a problem for regular HTTP multiplexing because browsers just open multiple connections to the server to download all the page resources. If one image stalls from lost packets, it has no effect on the other images and CSS files and such being downloaded simultaneously. If the downloads are pipelined in a single connection, then any stall will affect all resources and not just one.

I suppose a question to answer is how often a single connection out of many between two
endpoints stalls while the other connections do not. Most times I see excessive packet loss
it's for the entire connection, not just a single connected socket. That's not a particularly big
sample size though. :)

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 19:20 UTC (Wed) by knobunc (guest, #4678) [Link]

I think the question is how SPDY is any better than pipelining when encountering packet loss when both are built on TCP, and it is TCP that handles the re-transmits when packets are lost.

All I could find in the article is:
* SPDY sends ~40% fewer packets than HTTP, which means fewer packets affected by loss.
* SPDY uses fewer TCP connections, which means fewer chances to lose the SYN packet. In many TCP implementations, this delay is disproportionately expensive (up to 3 seconds).
* SPDY's more efficient use of TCP usually triggers TCP's fast retransmit instead of using retransmit timers.

But that is a comparison of plain (non-pipelined) HTTP to SPDY.

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 11:36 UTC (Thu) by v13 (guest, #42355) [Link]

And HTTP/1.1 always uses pipelining, so it is a comparison of HTTP/1.0 with
SPDY.

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 13:22 UTC (Thu) by knobunc (guest, #4678) [Link]

Sadly pipelining is off on most browsers.

Summarized from http://en.wikipedia.org/wiki/HTTP_pipelining
* IE8 - no
* Firefox 3 - yes, but disabled by default
* Camino - same as FF3
* Konq 2.0 - yes, but disabled by default
* Opera - yes AND enabled by default
* Chrome - not believed to support it, certainly not enabled

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 14:26 UTC (Thu) by v13 (guest, #42355) [Link]

The Konqueror line is obsolete, since Konqueror 2.0 is from KDE 2.0. I just tested 4.3.2 and it uses pipelining.

Firefox OTOH doesn't (just tested it). The bad thing about Firefox is that
it opens multiple connections but keeps each connection alive after the data
are transmitted (!). What a misuse of resources!

However, the support is there and all HTTP/1.1 servers support it. AFAIK, only Akamai servers don't support keepalives (they support HTTP/1.0 only).

Reducing HTTP latency with SPDY

Posted Nov 22, 2009 16:38 UTC (Sun) by ibukanov (subscriber, #3942) [Link]

> * Opera - yes AND enabled by default

Yet according to Opera engineers, that was not easy. Even after many years of having it enabled by default, they still have to tweak their blacklisting database, adding new entries that disable pipelining for particular sites. Had they known the pain in advance, they might not have implemented it at all.

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 0:21 UTC (Fri) by efexis (guest, #26355) [Link]

It doesn't; HTTP can only pipeline when the size of the incoming response is known beforehand (e.g., through sending a Content-Length: header at the beginning of the response). Without knowing how big the response is going to be, the client doesn't know when one response ends and the next begins, so the server has to close the TCP connection to signal that it's done.

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 1:15 UTC (Fri) by mp (subscriber, #5615) [Link]

Not necessarily. There is also the chunked encoding.

Reducing HTTP latency with SPDY

Posted Nov 28, 2009 18:41 UTC (Sat) by efexis (guest, #26355) [Link]

Chunked transfer does also send the size first; that way the other side knows when the chunk it's been receiving has come to an end and the header for the next has begun, the primary difference being that the message size becomes independent of the document size. But while it may be defined in the HTTP/1.1 spec, it's not mandated... end-to-end support is required for it, and there's a -wide- range of proxy servers out there, at the personal, corporate, and ISP level, transparent and explicit, all to cause problems, not to mention personal firewall/anti-virus software that perhaps can't complete its job until it has the whole document, so being party to chunked transfers isn't going to be so high on the developers' list of priorities.

None of these are particularly massive hurdles, but it's still the state of things even in HTTP/1.1 land, so the potential for improvement is very real, and being something that's most important to Google's business, there could actually be some pressure behind it.
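For reference, the chunk framing under discussion is simple: each chunk is a hexadecimal size line, CRLF, that many bytes of data, CRLF; a zero-size chunk ends the body. A minimal decoder sketch (assuming well-formed input and ignoring optional chunk extensions and trailers):

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked-coded body: hex size line, CRLF,
    payload, CRLF, repeated; a chunk of size 0 terminates the body."""
    out, i = b"", 0
    while True:
        eol = body.index(b"\r\n", i)        # end of the size line
        size = int(body[i:eol], 16)         # chunk size in hex
        if size == 0:
            return out                      # last-chunk marker
        start = eol + 2
        out += body[start:start + size]
        i = start + size + 2                # skip payload and trailing CRLF

print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))
# b'Wikipedia'
```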

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 23:21 UTC (Wed) by njs (guest, #40338) [Link]

> I also don't know how they figure a server push function doesn't change HTTP/Ajax level apis... sounds like a brand new api to me.

I was assuming that server-pushed resources just got shoved straight to the cache, so that when the browser got around to looking for them (e.g., because it parsed the HTML page and found the <img> tags), they were already available. That logic works fine for Ajax-style usage too. (But probably isn't as important there, since you can already write an Ajax client that fetches all the needed resources bundled together in a single request and then unpacks them.)

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 23:53 UTC (Wed) by martinfick (subscriber, #4455) [Link]

How does the server know that the client doesn't already have this pushed data cached?

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 1:05 UTC (Thu) by njs (guest, #40338) [Link]

Ask them? :-) From skimming the docs, it looks like it's a combination of: 1) they may ask the client to tell them whether it's visited this site before in the request headers, and only enable server-push on initial page loads (obviously there are a lot of important details to be worked out here regarding what an "initial page load" is!), 2) the client tells the server how much free bandwidth it's willing to waste on such speculative activities, and the server uses that to throttle the amount of possibly-redundant pushing.

But I didn't see a lot of detail; this seems to be an early still-to-be-fleshed-out draft.

Reducing HTTP latency with SPDY

Posted Nov 20, 2009 20:37 UTC (Fri) by pphaneuf (guest, #23480) [Link]

I think that functionality is optional, and is indeed something that a web site developer would have to take into account. But for the rest, SPDY looks like it's just something that would make your site faster if both the client and the server support it, sounds like a win to me.

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 0:46 UTC (Thu) by BrucePerens (guest, #2510) [Link]

That seemed odd to me too. You can't use a TCP stream socket to avoid waiting for packet redelivery or to accept out-of-order delivery without waiting. You'd need to use a datagram service.

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 0:42 UTC (Fri) by efexis (guest, #26355) [Link]

Nope :-) The problem with delayed pipelined connections is that the requests are still dealt with one at a time, just without the connection closing in between them. This means a delay of 200ms getting the first request flushed to the client means a 200ms delay in beginning to process the second request. But with a multiplexed connection, the second request can have been sent before the network delay occurs, and so the server can begin dealing with that request and queueing the packets for sending straight away... it may even be able to get them out the door and to the client, which means that as soon as the client gets the retransmitted packet that got lost, it can piece them all together and pass them into the application's memory, without having to spend as much time waiting for the packets that would otherwise have followed it. Remember that buffers may be in-order at both ends, but that doesn't mean that transmission must be also :-)

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 10:25 UTC (Thu) by cate (subscriber, #1359) [Link]

Yes. But it seems that packet losses reduce page-load speed a lot more for HTTP than for SPDY. IIRC, though, they don't really understand the cause (other than fewer packets, so fewer total lost packets).

OTOH, modern websites have both static and dynamic content. A pipeline usually works by reading the main page (dynamic) and then the static content (CSS, images). With multiple parallel streams (in SPDY), while the web server is waiting for the dynamic content, part of the static content can already be delivered to the user.

Sender vs. Recipient latency

Posted Nov 18, 2009 19:39 UTC (Wed) by smurf (subscriber, #17840) [Link]

IMHO multiplexing on top of TCP just shifts the blame; I'm sceptical that it's a really optimal solution.

Let a protocol that's designed to do so (SCTP) fix the streaming and multiplexing shortcomings of TCP, and let's fix the HTTP problems (like non-compressed chatty headers) in HTTP. Two separate problems. Don't conflate them into one "solution".

YES you'd need to substantially rewrite your server (and, though probably to a much lesser extent, client) if you want them to understand SCTP. So what? You'll have to do the exact same thing if you want to multiplex over HTTP. In fact, with SCTP the kernel does the multiplexing for you, so modifying Apache (or any other Web server) to use separate SCTP streams may well be _easier_ than to teach it to multiplex multiple contents over one TCP stream, which needs to be done in user space.

Sender vs. Recipient latency

Posted Nov 18, 2009 23:03 UTC (Wed) by iabervon (subscriber, #722) [Link]

The problem with SCTP isn't that you have to upgrade your server or that you have to upgrade your client, it's that you have to upgrade your cable modem, your firewall, your VPN, etc. Of course, a lot of the routing stuff can treat SCTP just like TCP, but they have to know how to do it. Also, if your system offloads some of the TCP calculations to the NIC, you might need the NIC to have SCTP offload in order to get the same performance.

Google's goal is probably something like making Gmail on Android phones fast. Cell networks are probably the biggest current common case where the latency impedes making full use of the available bandwidth. They're also a case where third-party hardware is doing NAT, and Verizon isn't going to run out and replace their expensive equipment to help Google.

It's ultimately the right solution to the problem, but that doesn't mean that an interim solution isn't needed.

Routing

Posted Nov 19, 2009 5:14 UTC (Thu) by smurf (subscriber, #17840) [Link]

The whole thing will have to fall back to TCP when some firewall (NOT router!) is stupid. But supporting SCTP has to start somewhere, and that necessarily involves pressure from users. (Witness the ECN problem.)

I don't think multiplexing over TCP is a good interim solution. If people start to implement that, SCTP will never take off.

The question is, do you want a technically-sound solution (which probably involves IPv6: no NAT there), or a hack which will ultimately delay implementing that solution?

I suppose Google is all about hacks, at least in this area. Witness Android. :-P

But supporting SCTP has to start somewhere ? Why?

Posted Nov 19, 2009 5:55 UTC (Thu) by khim (subscriber, #9252) [Link]

The whole thing will have to fallback to TCP when some firewall (NOT router!) is stupid.

For most people out there router is this thing connected to cable modem. Even if it's technically not router, but firewall with NAT. If SCTP needs an update for that piece of plastic it's DOA and not worth talking about.

The question is, do you want a technically-sound solution (which probably involves IPv6: no NAT there), or a hack which will ultimately delay implementing that solution?

Wrong question. The real question is: do you want a solution or handwaving? If the "technically-sound solution" is inevitable (i.e., the other proposed solutions either don't work at all or are just as invasive), it has a chance. If there is some other solution which is worse but works with the existing infrastructure... then the "technically-sound solution" can be written off immediately.

I suppose Google is all about hacks, at least in this area. Witness Android. :-P

Yup. Witness a system which works and is selling by the millions (albeit single-digit millions at this point) and compare it to "technically-sound solutions" which are scrapped and gone...

Google is about realistic solutions, not pipe-dreams. IPv6 is acceptable even if it has this stupid fascination with the "technically-sound solution" approach, because there are things IPv4 just can't do. But SCTP... I'm not sure it'll ever be used, but I'm realistically sure it'll not be used in the next 10 years.

SCTP is the heir apparent

Posted Nov 20, 2009 23:18 UTC (Fri) by perennialmind (guest, #45817) [Link]

When it comes to the open internet, I agree that it'll be a long, long time before SCTP could become broadly feasible, but that's because you're talking about upgrading a massive network. New protocols are not born on the internet, not anymore. New network protocols breed in the crevices of the LAN, and SCTP has a bright future there. Some of the newer protocols like SIP, iSCSI, and NFSv4 will happily sit atop SCTP. If you're going out to fix the same problems that SCTP tackles, at the very least you should define a mapping, as those protocols do. We don't need to keep the cruft forever, but it has to be a gradual upgrade. Encapsulate as needed: SCTP has a reasonable UDP layering. Because "internet access" translates to TCP port 80 for so many, you may have to define something like SPDY, but in that case shouldn't it simply be the TCP variant, on a level with SCTP? Even if it does take ten years, twenty years, won't you want to be able to drop the inefficient backwards compatibility at some point?

Comcast is upgrading their gear to IPv6 because /they/ need it. With the multi-homing support in SCTP, you should be able to sell it to Verizon, AT&T, Sprint, etc. as being genuinely useful to /them/. They have the unique position of both owning the (huge) proprietary networks all the way to the edge and actually making substantial use of those same networks, so they have both the ability and the business interest to adopt SCTP that random servers and clients do not. Just because SCTP isn't ready to supplant TCP for the web doesn't diminish its usefulness right now.

SCTP is the heir apparent

Posted Nov 22, 2009 0:56 UTC (Sun) by khim (subscriber, #9252) [Link]

Even if it does take ten years, twenty years, won't you want to be able to drop the inefficient backwards compatibility at some point?

Is it really so inefficient? Is it really impossible to make things more efficient while retaining compatibility? Witness the fate of Algol, which decided to "drop inefficient backwards compatibility at some point", and compare it with Fortran, which kept it around for decades. The same story played out with RISC and x86, and there are countless other examples. Compatibility is very important: it can only be dropped if there is no compatible way forward.

Comcast is upgrading their gear to IPv6 because /they/ need it.

Wrong emphasis. /They/ is irrelevant. /Need/ is imperative word.

With the multi-homing support in SCTP, you should be able to sell it to Verizon, AT&T, Sprint, etc as being genuinely useful to /them/.

You can try to do this, but it's almost too late. They are losing their networks and are becoming just "another ISP" (albeit a big one). AOL already went this way; Verizon, AT&T, and Sprint will follow. Sure, they'll try to delay it as much as possible, and maybe even survive long enough for SCTP to become a whole article in the history books, not just a footnote, but ultimately it's not a big difference.

But supporting SCTP has to start somewhere ? Why?

Posted Nov 25, 2009 14:27 UTC (Wed) by marcH (subscriber, #57642) [Link]

> For most people out there router is this thing connected to cable modem. Even if it's technically not router, but firewall with NAT.

<pedantic>Every NAT or firewall is technically some kind of router</pedantic>

More generally speaking,

I do not think any core network device blocks SCTP (or anything else it does not recognize). So if two parties want to use SCTP, they can, by just reconfiguring their edge devices. Except for NATs, but NATs will disappear with the very near exhaustion of IPv4 addresses and the pressure of P2P applications.

You do not need the whole planet to be able to use a new protocol or service in order for it to get some traction. Zillions of people are forbidden to use Facebook (and others...) at work. Does that doom Facebook?

But supporting SCTP has to start somewhere ? Why?

Posted Nov 25, 2009 15:36 UTC (Wed) by foom (subscriber, #14868) [Link]

> Except for NATs, but NATs will disappear with the very near exhaustion of IPv4 addresses
> and the pressure of P2P applications.

No they won't. Did you see the PR disaster Apple had when the Airport Express supported IPv6
without NAT? Everyone suddenly went "OMG my internal network is all exposed to the internet now,
giant security hole!!!". And of course, they were right -- that is unexpected behavior in today's
world. So, no doubt about it, NAT will live on even with IPv6. (When I say NAT there, I really mean
connection-tracking-based filtering: tacking the address translation on is trivial to do or not, but
it's the connection-tracking which would cause problems with SCTP).

But supporting SCTP has to start somewhere ? Why?

Posted Nov 25, 2009 18:57 UTC (Wed) by smurf (subscriber, #17840) [Link]

>> When I say NAT there, I really mean connection-tracking-based filtering

So why do you call it NAT, if no address is actually translated?

But supporting SCTP has to start somewhere ? Why?

Posted Nov 25, 2009 23:55 UTC (Wed) by foom (subscriber, #14868) [Link]

Because when people say "XXX is broken because of NAT", they actually mean "XXX is broken because of stateful connection tracking and filtering".

They just say "NAT" because stateful connection tracking and filtering is an integral part of NAT, and NAT is its most common use. Of course it's possible to do the connection-tracking without the address rewriting, but the important thing to note is that it is not any less complex, and causes no fewer problems.

It still prevents you from having an end-to-end internet.

You still want to have protocol-specific parsing in order to find "related" connections which should be allowed through. (e.g. with FTP). You'd still need a protocol like uPNP or NAT-PMP in order to advise the firewall to open a hole for things like BitTorrent. There's almost no advantage at that point versus actually having a NAT.

But supporting SCTP has to start somewhere ? Why?

Posted Nov 26, 2009 7:57 UTC (Thu) by smurf (subscriber, #17840) [Link]

>> There's almost no advantage at that point versus actually having a NAT.

Sure there is.

You avoid starving the router of TCP (or SCTP) ports. You avoid having to mangle TCP packets because they happen to contain addresses. You avoid IP address based "one-connection-per-client" limits on servers.

In short, you can use simpler servers and routers. Which translates to fewer bugs and less power-hungry CPUs.

But supporting SCTP has to start somewhere ? Why?

Posted Nov 25, 2009 23:45 UTC (Wed) by marcH (subscriber, #57642) [Link]

You are talking about default settings. I am talking about what is to become possible. Both are interesting, but quite different.

But supporting SCTP has to start somewhere ? Why?

Posted Nov 26, 2009 10:04 UTC (Thu) by marcH (subscriber, #57642) [Link]

> Except for NATs, but NATs will disappear...

Sorry, I actually meant:

So if two parties want to use SCTP, they can by just reconfiguring their edge devices. Except when they have only one old public IPv4 address to share. But quite soon many people will have ZERO IPv4 addresses to share, which will ironically solve the only major deployment problem of SCTP.

Sender vs. Recipient latency

Posted Nov 20, 2009 13:36 UTC (Fri) by Tet (subscriber, #5433) [Link]

you might need the NIC to have SCTP offload in order to get the same performance

Performance is irrelevant here. Your bandwidth to the rest of the net isn't going to come anywhere close to the speed of your network card, even without offload engines.

Sender vs. Recipient latency

Posted Nov 23, 2009 15:37 UTC (Mon) by phiggins (guest, #5605) [Link]

Performance may be irrelevant to the client here, but for servers it is
always important. A place like Google would likely want/need to saturate a
NIC with SCTP.

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 19:44 UTC (Wed) by clugstj (subscriber, #4020) [Link]

I would like to see the results of just using gzip on all HTTP traffic. If this gets us most of the way there, then the other stuff is just a waste of time.

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 20:03 UTC (Wed) by fixkowalski (guest, #13396) [Link]

Hmmm, full HTTP stream compression... that would bring benefits like several
percent of bandwidth lost when gzip-compressing video & images ;-)
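The point is easy to demonstrate. A rough sketch (my own illustration, not part of SPDY; the header text and sizes are made up): gzip shrinks repetitive HTTP header text dramatically, but already-compressed payloads such as JPEG or video data gain nothing, and gzip's framing overhead can even make them slightly larger:

```python
import gzip
import os

# Typical redundant request headers, repeated across many requests.
headers = (b"GET /index.html HTTP/1.1\r\n"
           b"Host: example.com\r\n"
           b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) Firefox/3.5\r\n"
           b"Accept: text/html,application/xhtml+xml\r\n"
           b"Accept-Encoding: gzip,deflate\r\n\r\n") * 20

# Random bytes stand in for already-compressed image/video data.
media = os.urandom(50_000)

print(len(headers), len(gzip.compress(headers)))  # large reduction
print(len(media), len(gzip.compress(media)))      # no reduction at all
```

So blanket gzip helps exactly where SPDY applies it (headers), and is wasted effort on media bodies.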

advertisers will love this

Posted Nov 18, 2009 20:30 UTC (Wed) by dlang (guest, #313) [Link]

now instead of relying on the browser to request the advertisements (where things like adblock can run to eliminate these requests and the delays related to them) the server can just tell the browser that it's sending them.

advertisers will love this

Posted Nov 18, 2009 20:55 UTC (Wed) by rfunk (subscriber, #4054) [Link]

And of course advertising is Google's primary source of revenue.....

advertisers will love this

Posted Nov 18, 2009 21:14 UTC (Wed) by jake (editor, #205) [Link]

> the server can just tell the browser that it's sending them

sure, but that doesn't mean the browser has to *display* them.

jake

advertisers will love this

Posted Nov 18, 2009 21:47 UTC (Wed) by dlang (guest, #313) [Link]

true, but a large chunk of the problem with advertisements is the slowdown that they cause by just using up bandwidth.

since these are usually compressed to start with, nothing in this proposal will help with this. all that this can do is reduce the added overhead of making a request for the users that would view the advertisement anyway.

advertisers will love this

Posted Nov 18, 2009 23:18 UTC (Wed) by Los__D (guest, #15263) [Link]

Usually, what causes advertisements to suck the life out of the browsers, isn't bandwidth, but the adserver not answering, and the browser (for some ridiculous reason) deciding to wait for it.

advertisers will love this

Posted Nov 19, 2009 12:41 UTC (Thu) by jzbiciak (guest, #5246) [Link]

...which points out the other reason why this won't be as much of a problem as a couple posts up suggested: Ads quite often come from an entirely different server. For SPDY to prioritize or push ads ahead of other content, the ads and the content need to come from the same server, which would require some rearchitecting of the web.

Now, that said, if Google moves another step further, providing SPDY support to browsers through a SPDY-to-normal-HTTP caching proxy (as I suggested it might in this comment), then the ad-push concern returns.

advertisers will love this

Posted Nov 19, 2009 18:12 UTC (Thu) by Simetrical (guest, #53439) [Link]

The browser needs to wait on any <script> before it can render the rest of
the page, because the semantics of <script> have historically been
synchronous. <script>s need to be executed before later parts of the page
are rendered, or else unexpected bugs will arise, because that's not how
anyone has done it since <script> first existed.

Recent browsers are getting better at doing whatever they can in advance of
the actual script load -- fetching resources that they expect they'll need,
parsing, and so on. But I don't think any will actually render the rest of
the page before the script is done executing. What would happen if the
script did something like document.write("<!--")? Or if it redirected to
a different page? The user would have been shown something they were
never supposed to see according to the applicable standards.

This is one of the problems <script async> is meant to solve, incidentally,
but it won't really work for most ad scripts -- they tend to use
document.write() to output the ad. In principle you could use DOM methods
to insert the ad after the fact instead, of course. Then again, this would
mean the user could scroll right past the ad location without ever seeing
it . . .

I'm a web developer, though, and not a browser implementer, so take this
explanation with a grain of salt.

advertisers will love this

Posted Nov 27, 2009 1:03 UTC (Fri) by efexis (guest, #26355) [Link]

On your last point about DOM insertion - you can reserve space on a page and then fill it later; for many <embed>'ed ads, like those in Flash, this is already the usual approach.

advertisers will love this

Posted Nov 20, 2009 10:25 UTC (Fri) by ballombe (subscriber, #9523) [Link]

> Usually, what causes advertisements to suck the life out of the browsers, isn't bandwidth, but the adserver not answering, and the browser (for some ridiculous reason) deciding to wait for it.

That is why my /etc/hosts carries
0.0.0.0 www.google-analytics.com
0.0.0.0 pagead2.googlesyndication.com
etc. with other adservers.

(not that google servers are the slowest to answer...)

advertisers will love this

Posted Nov 25, 2009 14:31 UTC (Wed) by marcH (subscriber, #57642) [Link]

> That is why my /etc/hosts carries
> 0.0.0.0 www.google-analytics.com
> 0.0.0.0 pagead2.googlesyndication.com
> etc. with other adservers.

Does that actually make a difference? I assumed adservers hide much better than that from AdBlock and others.

advertisers will love this

Posted Nov 18, 2009 23:23 UTC (Wed) by njs (guest, #40338) [Link]

Only if the advertisements are served from the same server as the main page, which IIRC is only true in a minority of cases. (And in particular, not true for Google ads.)

advertisers will love this

Posted Nov 20, 2009 20:41 UTC (Fri) by pphaneuf (guest, #23480) [Link]

I think this "push" function is mostly for stuff like implementing web mail and instant messaging services, the kind of stuff people use "hanging GETs" for today.

Reducing HTTP latency with SPDY

Posted Nov 18, 2009 22:31 UTC (Wed) by diegows (guest, #14690) [Link]

SCTP is the right choice, but it doesn't yet have flow control for each stream. I tested it some time ago and it works very well on satellite connections (where the latency is really high).

Reducing HTTP latency with SPDY

Posted Nov 20, 2009 12:43 UTC (Fri) by butlerm (subscriber, #13312) [Link]

SCTP has some other interesting problems with multiplexing large datagrams
- in particular it requires that the fragment sequence numbers (TSNs) for
each SCTP datagram be sequential.

That means that a typical SCTP implementation will transmit all the
fragments of a given datagram at least once before starting on the
fragments of any other datagram. In addition, a typical socket interface
to SCTP serializes the reception of datagrams as well. Once you have
started reading a datagram you cannot read anything else until that
datagram is finished.

This works really well if your datagrams are small, so that they consist of
a small number of fragments and are small enough to fit several in the
kernel level socket buffer. If either requirement is not met, SCTP will
serialize the datagram in much the same way as TCP.

As a consequence, an application layer multiplexing protocol like SPDY is a
practical necessity to gain the additional advantages of SCTP with a large
message oriented upper layer protocol like HTTP, the additional advantage
being non-serialization of small datagrams in the presence of packet loss.

SPDY and SCTP should be seen as complementary, rather than as independent
alternatives. That said the firewall / NAT problem with respect to SCTP is
serious enough that one should either use a variation of SCTP designed to
be layered over UDP, or something very similar, such that one effectively
has HTTP over SPDY over "SCTP" over UDP for optimal performance. HTTP over
SPDY over TCP is a major step in that direction, a solution *far* superior
to HTTP over SCTP or TCP alone.
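The multiplexing advantage described above can be sketched in a few lines (a hypothetical illustration, not SPDY's actual frame format): each message is chopped into small frames tagged with a stream id, and frames from different streams are interleaved on one channel, so a large response cannot monopolize the connection the way a large SCTP datagram serializes its fragments:

```python
def frames(stream_id, payload, chunk=4):
    """Split one message into (stream_id, fragment) frames."""
    for i in range(0, len(payload), chunk):
        yield (stream_id, payload[i:i + chunk])

def interleave(*streams):
    """Round-robin frames from all streams onto one channel."""
    streams = [iter(s) for s in streams]
    while streams:
        for s in list(streams):
            try:
                yield next(s)
            except StopIteration:
                streams.remove(s)

big = frames(1, b"x" * 12)   # a large response
small = frames(2, b"ok")     # a small one that should not have to wait
wire = list(interleave(big, small))
# stream 2's frame goes out after only one frame of stream 1
```

With plain SCTP datagrams, all fragments of the big message would be sent before the small one; with app-layer framing the small message gets on the wire immediately.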

Reducing HTTP latency with SPDY

Posted Nov 20, 2009 9:17 UTC (Fri) by Kamilion (subscriber, #42576) [Link]

Is it just me or does this sound a lot like what you would end up with if I asked you to describe the concept "HTTP Over SSH" in detail?

This sounds like a job for

subsystem /usr/bin/sshttpd

Reducing HTTP latency with SPDY

Posted Nov 20, 2009 12:53 UTC (Fri) by butlerm (subscriber, #13312) [Link]

Yes. Although SPDY does other things like header compression that would have
to be done above the SSH level. And SSH would have to be adapted to do
TLS/SSL style certificate validation. TLS over SSH would be rather
redundant, encryption wise.

Reducing HTTP latency with SPDY

Posted Nov 22, 2009 12:00 UTC (Sun) by Kamilion (subscriber, #42576) [Link]

SSH2 supports zlib stream compression.

Honestly, I think I prefer SSH's key exchange over TLS.
But then again, they protect different sorts of things.

RFC 4255 sort of makes that moot though, by publishing the server's public key fingerprint hash in the DNS record. Still, if you force a handshake with an unknown domain, DNS is easy to poison as well.

Would SPDY be better off as being considered a new type of proxy protocol to something like squid that could maintain and cache data instead of a server access protocol?

Also, most people tend to forget that encryption isn't mandatory for SSH data channels.

There's no reason why you couldn't pass HTTP/1.1 as is over a SSH channel to a named subsystem.

And sshttpd is as good a name as any, I suppose.

Though, I must ask...

I am one of those crazy bastards with hundreds of open tabs across multiple monitors.

Right now, most of them lay dormant, eating only my own resources.

How can I justify maintaining an open channel to each of these hundreds of sites and servers when I can only realistically balance about six in view?
(TooManyTabs for Firefox helps a bit...)

How many more out there are there like me that care?
How many more that won't care how many resources they abuse?

It's sort of rhetorical and doesn't really require a response; however, I am still curious about others' viewpoints and opinions.

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 1:36 UTC (Fri) by efexis (guest, #26355) [Link]

I think many of those tabs may be facing sites that don't need to keep the connection open after the data is transmitted, so those connections will just get closed. In the meantime, if one of those pages has a long URL you've used to access it (which includes form GETs), that URL gets sent as the HTTP REFERER with every subcomponent request the page needs (images, iframes, scripts), as does any cookie string. While a protocol that allows you to send those only once may not be a big win for you, for a server that's dealing with tens or hundreds of thousands of requests a minute that's a massive saving, as it is for the series of tubes connecting that server to you.
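A back-of-the-envelope illustration of that saving (all numbers are made up for the example): if cookies and a long Referer are re-sent with every subresource request, the per-page overhead dwarfs what sending them once would cost:

```python
referer = 300     # bytes: a long URL with form GET parameters
cookies = 500     # bytes of Cookie headers
subresources = 40 # images, scripts, stylesheets on the page

per_request = referer + cookies
resent_today = per_request * subresources  # HTTP: sent with every request
sent_once = per_request                    # SPDY-style: sent once per session

print(resent_today, sent_once)  # 32000 vs 800 bytes per page view
```

Multiply that 40x difference by thousands of page views a minute and the server-side bandwidth argument makes itself.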

For sites that do want to keep the connection open there could be additional savings for the server. Imagine the normal process of doing this (for instant message delivery etc.): you open a connection to the remote web server, which parses your request and then fires up an instance of Perl or PHP etc. to handle it. To hold the connection open, that instance of Perl/PHP stays running, holding the connection open and printing to it when it needs to. With a multiplexing connection this needn't be true; a minimal connection-handling server can stay running, holding details of the connection (inter-connection persistent headers etc.) in its memory, while allowing the script process to unload until it's next needed, if it's needed again. The other option is that other request handlers may be able to open a channel to you through the held-open connection and send data to you. Without this (assuming an instant-messaging model), both the sender and receiver of the message would each have their own instance of the script processor running, with the process handling the sender's request connecting to the process handling the receiver's request and sending it the message, which it would then bounce on to you.

So yeah don't worry about it, this should all mean you can keep all your tabs open, and now with up to 50% less guilt! :-)

Server side impact of SPDY

Posted Nov 20, 2009 12:16 UTC (Fri) by zmi (guest, #4829) [Link]

What I'm missing from the article is the server side view. Will it improve server throughput as well? If you have a server that manages 1000 requests/s now, will that be more, less, or equal with SPDY?

Will CPU usage of the server increase dramatically because of compression?

Also, for efficient push methods, the client would have to fill its cache via server push. That would be a great benefit for sites with many pictures: initially the first page with all pictures is sent, and while the browser displays that page and the user searches for the next link to click, the server could push content in the background. So when the user clicks on the next page, the browser already has most pictures (or other content) and just needs the HTTP page data, resulting in a much better browsing experience. But there would need to be server-wide control over that feature, because 200 simultaneous "background pushes" could grind the server to a halt. Could be really nice if used correctly, though.

Server side impact of SPDY

Posted Nov 26, 2009 8:21 UTC (Thu) by jengelh (subscriber, #33263) [Link]

I see what you did there. X-Associated-Content being used as a server push for ads. No thanks.

Server side impact of SPDY

Posted Nov 26, 2009 8:58 UTC (Thu) by zmi (guest, #4829) [Link]

Hm, I didn't think about ads. That could really be a problem. I use Adblock Plus and NoScript plugins in Firefox, so I don't see a lot of these, and I'm sure they could also help to filter unwanted push.

So the protocol should allow the client to filter the push list *before* it's actually sent. A simple handshake would help:

server: I want to send x.html, ad.html, pic.jpg, ad.swf
client: Just send x.html, pic.jpg

And so the filter works before the push. After all, we want things faster, so filtering things the client won't see anyway before it's pushed seems necessary.
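The proposed handshake is simple enough to sketch (purely hypothetical; SPDY defines no such negotiation): the server advertises what it wants to push, the client drops anything matching its block list, and only the approved resources are transmitted:

```python
# Client-side block list; prefixes are an assumption for the example.
BLOCKLIST = ("ad.", "ads.", "tracker.")

def filter_push(offered):
    """Return the subset of offered resources the client will accept."""
    return [r for r in offered
            if not any(r.startswith(p) or ("/" + p) in r for p in BLOCKLIST)]

# Server: "I want to send x.html, ad.html, pic.jpg, ad.swf"
offered = ["x.html", "ad.html", "pic.jpg", "ad.swf"]
# Client: "Just send x.html, pic.jpg"
approved = filter_push(offered)
print(approved)  # ['x.html', 'pic.jpg']
```

The round trip for the offer/accept exchange costs one extra exchange per page, but it saves pushing bytes the client would discard anyway.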


Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds