
Google releases Neatx NX server


July 24, 2009

This article was contributed by Koen Vervloesem

On July 7, internet search giant Google not only announced its operating system Google Chrome OS with much fanfare, it also quietly released Neatx, an open source NX server. According to the announcement, Google has been looking at remote desktop technologies for quite a while. While the X Window System has issues with network latency and bandwidth, the NX protocol compresses X requests and reduces round-trips, resulting in much better performance — to the point that it can be used over network connections with low bandwidth.

So with Neatx, users can log in to a remote Linux desktop. Moreover, the session can be suspended and resumed later from another computer, resembling the functionality that GNU screen offers for console sessions. But, unlike screen, a Neatx user has access to the GUI of the remote machine, just as if they were sitting in front of it.

The NX protocol, which uses SSH for transport and authentication, was developed by the Italian company NoMachine, which released the source code of the core NX technology in 2003 under the GPL. NoMachine offers free (as in beer) client and server software for various operating systems, including Linux. It wasn't very long before free-as-in-speech NX clients emerged; then, in 2004, Fabian Franz implemented FreeNX, a GPL implementation of an NX server.

FreeNX development stalls

However, after a number of years the FreeNX project is facing some serious problems. Franz hasn't responded to e-mails on the developer mailing list for a long time and he seems to be the only one able to check code into the repository. As a consequence, the development has stalled for some time. That brought Florian Schmidt to ask about the future:

I think the whole freenx project should decide if they still like to wait for Fabian or if they want to start the project on a new space with some more admins and decide a development core team and project space maintainers.

Because upstream FreeNX development has stalled, downstream packagers have essentially picked up the development. There is a FreeNX team that maintains Debian and Ubuntu packages. These maintainers push appropriate patches to their branch and thus have the most up-to-date repository, with some extra features the official FreeNX server doesn't have, such as shadowing local X sessions and stubs for guest sessions. Marcelo Boveto Shima, one of the maintainers, noted FreeNX problems in a post to the FreeNX mailing list: "Working on FreeNX is a dead-end and it is becoming too hackish." He decided to write his own FreeNX server, TaciX. In the meantime, the Debian/Ubuntu repository has become the "upstream" for Gentoo's FreeNX package.

A new NX server from scratch

Shima wasn't the only one disappointed in FreeNX development. According to Google the server was "written in a mix of several thousand lines of BASH, Expect and C, making FreeNX difficult to maintain." That's why some developers at Google designed Neatx, a new implementation, based on NoMachine's open source NX libraries:

Designed from scratch with flexibility and maintainability in mind, Neatx minimizes the number of involved processes and all code is split into several libraries. It is written in Python, with the exception of very few wrapper scripts in BASH and one program written in C for performance reasons. Neatx was also able to reuse some code from another Google Open Source project, Ganeti. The code still has some issues, but we're confident interested developers will be able to fix them.

Google implemented Neatx because the company operates a large number of virtualized workstations in clusters [PDF], running on its cluster virtual server management software tool, Ganeti. To be able to log in to the virtual workstation from home or via a wireless connection and work smoothly, X or VNC can't be used. That led Google to turn to the NX protocol. An added bonus is that the protocol allows restoring a session opened at the office from home and vice versa. In the release announcement, the developers noted that Neatx implements some features not found in FreeNX, but also that it lacks some other features that FreeNX has.

Neatx in action

Your author tried both QtNX and NoMachine's NX client to connect to FreeNX 0.7.3 and Neatx on Ubuntu 9.04. Because Neatx has not yet released an official version, your author checked out the latest source code and built it. It turned out QtNX can't connect to Neatx because of a version mismatch, and the Neatx developers seem to test their server software with NoMachine's NX client, so that is the only supported client for now.

[Neatx]

Session creation, suspension, resumption, and shutdown all work well in Neatx. Users can choose between Gnome, KDE, Application, and Console sessions, and they can run their session on a virtual desktop or as a floating window. They are also able to set the keyboard preferences, the resolution, and choose full-screen mode. Neatx supports session shadowing, the ability for multiple users to view and collaborate within the same NX session. For the moment that only works with sessions belonging to one user, so it's not that usable yet. Sharing of the X clipboard also works flawlessly.

A couple of things don't work yet. For example, terminating an open session from the session list isn't possible. The user first has to resume the session and then terminate it. Tunneling of sound, printers, and Samba are also not yet implemented. And Neatx doesn't support RDP (the remote desktop protocol for Windows) or VNC sessions, something that FreeNX does support. There are also still some loose ends because the code is still alpha. However, the Neatx Google Group is pretty active and already has some interesting suggestions for further developments, such as a jailed NX, enabling users to NX into a server while not being able to see any other user's data, and printer tunneling.

Although the simultaneous announcements of Google Chrome OS and Neatx seem to be pure coincidence, both are based on the thin client concept. Chrome OS is a perfect operating system for the casual user with a netbook connected to the internet, running most applications in a web browser. For applications that don't run inside the browser, a Neatx server on Google's or someone else's servers can offer a desktop "in the cloud" which can be accessed from anywhere. Google's own use of Neatx for virtual workstations shows that the thin client concept is reviving. Hopefully it will also revive developers' interest in contributing to a free NX server, which is an essential component of this development.



Google releases Neatx NX server

Posted Jul 24, 2009 22:06 UTC (Fri) by ejr (subscriber, #51652) [Link]

Kudos to Google for working *with* an existing protocol! Looking forward to trying this package's version of nxproxy. I use it to forward R and Octave visualizations from cluster back-ends...

Google releases Neatx NX server

Posted Jul 27, 2009 16:44 UTC (Mon) by iabervon (subscriber, #722) [Link]

Google actually does a very good job of using existing protocols, at least for stuff that's not internal to the company. E.g., Google Talk is XMPP with a few extensions using XMPP's standard extension mechanism; their Wave protocol is also an XMPP extension (with the interesting aspect that it implies XMPP servers talking to each other without necessarily having any XMPP clients at the ends).

Google releases Neatx NX server

Posted Jul 24, 2009 22:30 UTC (Fri) by MattPerry (guest, #46341) [Link]

If the X Windows protocol has network latency and bandwidth issues, then why don't we fix those issues rather than wrapping them in another protocol? It seems that the NX protocol treats the symptoms without addressing the underlying problem.

Google releases Neatx NX server

Posted Jul 24, 2009 22:38 UTC (Fri) by dlang (guest, #313) [Link]

the biggest problem with X is that it frequently takes a _lot_ of round-trip messages to do standard things. each application that starts needs to do these same things, and each different display that you connect to may give you different answers.

one significant thing that NX does is to provide a local server to act as a proxy for these sorts of things: if it already knows the answers, it provides them without having to actually go and ask the display.

'fixing the X protocol' to do the same thing would end up looking very similar: a local daemon that local applications think is their display, and that remembers the answers from prior requests.

you can't eliminate these calls without eliminating backwards compatibility, and so far nobody has been willing to do that.
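The caching idea described above can be sketched in a few lines of Python. This is a toy model, not NX's actual code — the class names and the atom-lookup example are invented for illustration: queries whose answers never change for a given display cost one round trip the first time and are answered locally afterwards.

```python
# Toy model of an NX-style caching proxy (names invented; not NX code).

class Display:
    """Stands in for the remote X server."""
    def __init__(self):
        self.round_trips = 0
        self._atoms = {}

    def intern_atom(self, name):
        self.round_trips += 1        # every call crosses the network
        return self._atoms.setdefault(name, len(self._atoms) + 1)

class CachingProxy:
    """Local daemon that remembers the display's previous answers."""
    def __init__(self, display):
        self.display = display
        self.cache = {}

    def intern_atom(self, name):
        if name not in self.cache:
            self.cache[name] = self.display.intern_atom(name)
        return self.cache[name]

remote = Display()
proxy = CachingProxy(remote)
# Ten applications starting up all ask for the same two atoms.
for _ in range(10):
    proxy.intern_atom("WM_PROTOCOLS")
    proxy.intern_atom("WM_DELETE_WINDOW")
print(remote.round_trips)   # 2 -- only the first lookups hit the network
```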

Google releases Neatx NX server

Posted Jul 26, 2009 4:10 UTC (Sun) by sbergman27 (guest, #10767) [Link]

"""
the biggest problem with X is that it frequently takes a _lot_ of round trip messages to do standard things.
"""

True. Watching an X session over a dial up modem (external, with tx/rx lights) is quite interesting. Not only are there a lot of round trips, but it appears that everything happens in serial fashion with only one request or response in the pipe at any given time. In the modem's LEDs you can clearly see "request, response, request, response, request, response...".

Many people think that X needs bandwidth. It actually isn't such a bandwidth hog. Others correctly point out the round trip issue. But rarely have I heard anyone comment upon the serialized nature of the protocol. And that looks like the real performance killer to me. At least over any sort of WAN. At Ethernet latencies it's likely not a problem at all.

Google releases Neatx NX server

Posted Jul 26, 2009 9:28 UTC (Sun) by njs (guest, #40338) [Link]

When people complain about X and round trips, it's the serialization issue that they're talking about. No-one cares if there's *one* round trip between a user action and a visible response (or if they do, there's nothing that could be done about it anyway), it's the fact that you have to wait for one response before you can make the next request that turns round-trip latency into your bottleneck.

That's why relatively crude protocols like VNC can completely outclass X -- sure, now you're stuffing giant blocks of pixels down the network pipe and taking way more bandwidth, but those giant blocks of pixels have no dependencies -- so instead of waiting around all the time, you can just saturate the pipe. (rsync uses a similar strategy; see also pipelined SMTP, IMAP, HTTP, ...)
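The arithmetic behind that observation is stark. With illustrative numbers (a 100 ms round-trip latency and a couple of hundred tiny requests at session startup — both assumptions, not measurements of any real protocol), a sketch:

```python
# Back-of-the-envelope model of serialized vs. pipelined requests
# (illustrative numbers only).

RTT = 0.100   # round-trip latency in seconds
N = 200       # number of small requests at session startup

# Serialized (classic Xlib style): each request waits for its reply
# before the next one is sent.
serialized = N * RTT

# Pipelined (the VNC/rsync/pipelined-SMTP strategy): the requests have
# no dependencies, so all N can be in flight at once; total time is
# roughly one RTT, ignoring transfer time because the requests are tiny.
pipelined = RTT

print(serialized)   # 20.0 seconds
print(pipelined)    # 0.1 seconds
```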

Google releases Neatx NX server

Posted Jul 26, 2009 17:38 UTC (Sun) by elanthis (guest, #6227) [Link]

It's my understanding that X itself doesn't require a ton of round trips but that Xlib (the X library) requires round trips. That was one of the main reasons why XCB was written, and why toolkit authors are encouraged to drop Xlib and use XCB for their X backends.
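XCB's request/cookie split can be rendered schematically. The real API is C (a request returns a cookie immediately and the matching `xcb_*_reply()` call blocks later); the control flow is easier to show in a few lines of Python, and every name here is an invented stand-in, not the libxcb interface.

```python
# Schematic rendering of XCB's request/cookie split (invented classes,
# not the real libxcb API).

class FakeServer:
    """Stands in for the X server connection; counts round trips."""
    def __init__(self):
        self.round_trips = 0
        self.pending = []

    def flush_and_wait(self):
        # One round trip delivers replies for everything sent so far.
        self.round_trips += 1
        for cookie in self.pending:
            cookie.ready = True
        self.pending.clear()

class Cookie:
    """Handle returned immediately when a request is issued."""
    def __init__(self, server, value):
        self.server, self.value, self.ready = server, value, False
        server.pending.append(self)

    def reply(self):
        if not self.ready:               # block only if not yet delivered
            self.server.flush_and_wait()
        return self.value

# Xlib style: request, wait, request, wait...
xlib_server = FakeServer()
xlib_atoms = [Cookie(xlib_server, i).reply() for i in range(4)]
print(xlib_server.round_trips)   # 4

# XCB style: issue everything first, then collect the replies.
xcb_server = FakeServer()
cookies = [Cookie(xcb_server, i) for i in range(4)]
xcb_atoms = [c.reply() for c in cookies]
print(xcb_server.round_trips)    # 1
```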

Google releases Neatx NX server

Posted Jul 26, 2009 18:58 UTC (Sun) by sbergman27 (guest, #10767) [Link]

That bit about xlib and the libraries built upon it reminds me of something that my Dad told me once about how he and my mother decorated the rooms in our family's lake cabin back in 1957. He said that they'd take the first thing they put in... say some curtains... and then for the next thing they'd match the color to that. And then for the next thing, they'd match the colors of that and/or the first thing. And by the time they finished they had a whole room whose decorative scheme was completely based upon one insignificant thing which they didn't really like all that much anyway.

Google releases Neatx NX server

Posted Jul 27, 2009 9:34 UTC (Mon) by PO8 (guest, #41661) [Link]

It isn't just Xlib. X apps as well as Gnome/Gtk and KDE/Qt make a lot of gratuitous round trips also. XCB does make it easier to hide the latency of those round trips, but somebody still has to do the work to implement this. Keith Packard showed a few years ago that enough latency could be removed to make X work over reasonably high-latency links without doing anything particularly special to the X protocol. However, AFAIK no one ever aggressively went after this; there seems to be just a limited amount of energy to care about high-latency links in 2009. A proxying solution like NX can perhaps help to tackle the problem with less effort than optimizing the X client side, serving as a stopgap until everybody's latency is so low that no one cares anymore.

Very bad idea

Posted Jul 27, 2009 12:31 UTC (Mon) by khim (subscriber, #9252) [Link]

A proxying solution like NX can perhaps help to tackle the problem with less effort than optimizing the X client side, serving as a stopgap until everybody's latency is so low that no one cares anymore.

Latency will never go away. That's just a fact of life. The speed of light is 300'000km/sec and the Earth's circumference 40'000km, thus the minimum possible worst-case round-trip latency is about 130ms. In reality it can be somewhat reduced by using a few datacenters (like Google does), but it cannot be reduced enough to make remote usage of an X server "in the cloud" possible...

Very bad idea

Posted Jul 27, 2009 23:16 UTC (Mon) by marcH (subscriber, #57642) [Link]

> Speed of light is 300'000km/sec...

... in vacuum. In fiber or electrical cables it is typically 200,000 km/s. That's 5 milliseconds per 1000km. Multiply by two to get the round-trip time, and multiply once again by (typically) two to account for buffering and processing in active network nodes.
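Putting the thread's round numbers together (200,000 km/s in cable, antipodes roughly 20,000 km apart along the surface), a quick back-of-the-envelope check:

```python
# Propagation-delay arithmetic from the thread above (round numbers).
SIGNAL_SPEED = 200_000   # km/s in fiber or copper, per the comment above
ANTIPODE_KM = 20_000     # half of the Earth's ~40,000 km circumference

one_way = ANTIPODE_KM / SIGNAL_SPEED     # seconds
round_trip = 2 * one_way
with_buffering = 2 * round_trip          # the factor of two for network nodes

print(round(one_way * 1000))         # 100 ms
print(round(round_trip * 1000))      # 200 ms
print(round(with_buffering * 1000))  # 400 ms
```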

Very bad idea

Posted Jul 31, 2009 23:24 UTC (Fri) by sbergman27 (guest, #10767) [Link]

Why run a 40,000km cable when you could just run a 3 foot cable the opposite direction?

Very bad idea

Posted Aug 1, 2009 9:59 UTC (Sat) by modernjazz (guest, #4185) [Link]

Well, those of us who live on Neptune don't think that solution will work
for us.

Sorry, bad math

Posted Aug 2, 2009 4:40 UTC (Sun) by khim (subscriber, #9252) [Link]

Why run a 40,000km cable when you could just run a 3 foot cable the opposite direction?

If you use a 3-foot cable you'll be 20'000km from the destination. You get 40'000km when the two points are antipodes: it does not matter which way you go, it's 20'000km one way and 20'000km the other - direction is irrelevant. The only way to reduce the distance is to bore a hole, and the deepest hole known to man (less than 15km) does not shrink the distance all that much...

Sorry, bad math

Posted Aug 6, 2009 16:32 UTC (Thu) by phanser (subscriber, #60087) [Link]

40000 km is earth circumference,
see http://en.wikipedia.org/wiki/Earth

Exactly!

Posted Aug 9, 2009 12:12 UTC (Sun) by khim (subscriber, #9252) [Link]

A device capable of burrowing straight through the Earth? I know a few guys who'll pay BIG BUCKS for such a thing - where are you selling it?

And if you are not selling, then the only way to get to the antipode is along the Earth's surface - so yes, 40'000km (give or take) is the shortest path available...

Exactly!

Posted Aug 13, 2009 19:46 UTC (Thu) by hummassa (guest, #307) [Link]

Come on, you are being thick. 40000km is the CIRCUMFERENCE of the Earth, i.e., the distance you walk if you want to make a FULL circumnavigation.

The distance between antipodes is half that - 20000km ALONG THE EARTH'S SURFACE. It would be ~12000km through the center of the Earth.

Exactly!

Posted Aug 13, 2009 19:58 UTC (Thu) by johill (subscriber, #25196) [Link]

And if your two machines are at antipodes, then your latency is set by the round trip, which is ~40Mm of path.

Google releases Neatx NX server

Posted Jul 29, 2009 8:54 UTC (Wed) by mjthayer (guest, #39183) [Link]

Since Xlib clients can switch to XCB painlessly using xlib-xcb (was that the name?), it should be pretty simple to optimise Qt and Gtk+ using XCB functionality where the round trip time is a problem. The fact that no one has done this suggests to my mind that it is not a problem for many people.

Google releases Neatx NX server

Posted Jul 29, 2009 17:59 UTC (Wed) by dlang (guest, #313) [Link]

it's not a problem for the vast majority of people.

the vast majority of people only use X on a local machine and never touch the network

this is especially true for developers.

it's also true that most people have no problem with the bloat of current desktop software because they run on recent machines that are fast and have lots of ram.

in both cases this doesn't mean that there isn't a problem, or that fixing the problem wouldn't improve things for everyone, with drastic improvements for some users (possibly drastic enough to open an entire new category of use). it just means that unless it's pointed out to people, and they are encouraged to _try_ the use-cases that have problems, they will never notice them.

personally I suspect that the linux desktop bloat had some impact on the weak showing of linux on netbooks. most linux distros and desktop environments really want more resources than a netbook has. Linux can run fine on that sort of hardware (I know, I used much lesser hardware for many years), but the current crop of desktop environments really don't care about resource use.

Google releases Neatx NX server

Posted Jul 29, 2009 19:38 UTC (Wed) by mjthayer (guest, #39183) [Link]

The idea is that tools like powertop and latencytop spur people into action over this sort of thing. So what is the tool for measuring the sore spots in an applications use of Xlib, so that those parts can be replaced with direct use of XCB?

Google releases Neatx NX server

Posted Jul 29, 2009 20:46 UTC (Wed) by dlang (guest, #313) [Link]

the easy thing here would be to have a developer use X over a network and introduce artificial latency into the connection (which can be done with the kernel's netem queueing discipline via tc)

if you add a fraction of a second of latency you shouldn't notice it for most things, but if you see points in your code start taking significantly more time, they are doing many serialized round trips.
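A hedged sketch of that test setup: on a real Linux box the usual tool is netem (`tc qdisc add dev eth0 root netem delay 200ms`), but the same effect can be faked in-process with a tiny forwarding proxy that sleeps before each chunk. All names below are invented for illustration.

```python
# Toy latency injector (names invented for illustration). On real Linux
# you would use netem instead: tc qdisc add dev eth0 root netem delay 200ms
# pump() forwards bytes between two sockets, sleeping before each chunk
# to simulate a slow link.
import socket
import threading
import time

DELAY = 0.05   # artificial one-way latency, in seconds

def pump(src, dst, delay=DELAY):
    """Forward data from src to dst, adding `delay` before each chunk."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        time.sleep(delay)
        dst.sendall(data)

# Wire an "application" to a "display server" through the slow pump.
left_client, left_proxy = socket.socketpair()
right_proxy, right_server = socket.socketpair()
threading.Thread(target=pump, args=(left_proxy, right_proxy),
                 daemon=True).start()

start = time.monotonic()
left_client.sendall(b"XInternAtom request")
reply = right_server.recv(4096)
elapsed = time.monotonic() - start
print(elapsed >= DELAY)   # True: the request arrived, but DELAY seconds late
```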

Google releases Neatx NX server

Posted Jul 24, 2009 23:20 UTC (Fri) by ejr (subscriber, #51652) [Link]

Having an implementation of NX from a responsive group is a step in seriously evaluating the protocol. I wouldn't be surprised if it were pulled into X (as a module) someday if people step up, push it, and shepherd the protocol and implementation. The NoMachine folks had no reason to do so; that wasn't their focus at all.

However, to many the critters have left the barn on X over the network. DRM. Fonts. There's enough pain to make people hesitate.

Google releases Neatx NX server

Posted Jul 25, 2009 12:36 UTC (Sat) by nix (subscriber, #2304) [Link]

What? 3D: works over the network. Fonts: put them on the client, rather
than the server (or both, if you want core fonts to work too, but that's
getting quite unimportant these days).

Google releases Neatx NX server

Posted Jul 26, 2009 13:31 UTC (Sun) by sbergman27 (guest, #10767) [Link]

"""
Fonts: put them on the client, rather than the server (or both, if you want core fonts to work too,
"""

Can't you just:

Xorg -query myserver.localdomain -fp tcp/myserver.localdomain:7100

and make sure xfs is running on myserver (and listening on tcp:7100), and so have all the server's installed fonts available everywhere? One shouldn't have to upgrade a thin client once it is in place. Only the server.

Google releases Neatx NX server

Posted Aug 11, 2009 15:41 UTC (Tue) by Lurchi (guest, #38509) [Link]

You are confusing Server and Client.
Terminal Server == X client
Terminal (Thin) Client == X display server

So you install the fonts on the terminal server (once) and the program
will render the same on every client.

Google releases Neatx NX server

Posted Jul 27, 2009 15:31 UTC (Mon) by ejr (subscriber, #51652) [Link]

3D works, unless the app checks for direct rendering and dies otherwise. Most I've tried do that. The bandwidth requirements for video on the side of a rotating cube are still kind of a problem. And currently GLX ties you to Xlib and all its unhideable round trips. IIRC, there's a Google SoC project on that aspect, so there's hope. But 3D is also a pain for virtualization, so I'm holding out hope that the issues will resolve themselves with time.

The font problem isn't technical, it's licensing. If you want more than bitmaps in the US, you need the appropriate license for the font. Elsewhere, the font needs to be licensed regardless of whether it's used pre- or post-rendering. There are good, free fonts now, but I'm sure you know apps that aren't agreeable to using anything but their pre-defined, proprietary fonts. Again, these issues could be managed, but they're a pain. Just configuring font matching is enough of a pain anyway (not a dig, it's a difficult problem).

A VNC-like proxy sidesteps most of these. An NX proxy, well, it still runs into problems with heavy graphics (scientific visualizations, how I use it), but it does make network use feasible again.

Google releases Neatx NX server

Posted Jul 25, 2009 3:31 UTC (Sat) by dkite (guest, #4577) [Link]

Xorg was forked in 2004 because the existing project was unresponsive. NX was developed before that; it looks like 2003-ish.

It may be that fixing the protocol within the X community wasn't possible. After all, it wasn't the only thing needing fixing, and Xorg was forked to make the necessary reworking possible.

Nx is very quick and responsive. It did a very nice job with resolutions
making my netbook work very well over a slow link to my desktop machine.

I wonder if google is going to offer a virtual desktop service for their
netbook OS? I was poking around setting such a thing up for myself but ran
into the not finished parts of the free nx server.

Derek

What's the point?

Posted Jul 25, 2009 5:03 UTC (Sat) by khim (subscriber, #9252) [Link]

I mean: what can they offer to Chrome OS users? A typical Linux desktop? 2007 showed that users are not interested in that. It looks like a typical internal project: engineers are familiar with Linux, and I guess Linux is used a lot inside Google, so it makes sense to offer them virtualized Linux with Neatx. As for Chrome OS... this is an OS for "mere mortals", right? They have no use for that stuff - unless it's a Windows-based virtual computer. And the last thing Google wants is to promote Windows further.

That being said, I'm pretty sure someone will provide such a service. But for that they'll need some free client ported to NaCl, and right now Neatx only works with the proprietary client... oops?

The reverse

Posted Jul 25, 2009 8:15 UTC (Sat) by man_ls (guest, #15091) [Link]

What about remote support for Chrome OS? Manual intervention does not look like a Google thing, but for a price -- who knows.

Why will you need manual intervention?

Posted Jul 25, 2009 9:00 UTC (Sat) by khim (subscriber, #9252) [Link]

Manual intervention does not look like a Google thing, but for a price -- who knows.

Hmm... Why do you need NX for that? You can throw away the whole thing and your data will still be available in the cloud. All the data on the netbook itself is just a cache for the data in the cloud. Sure, if your last two weeks spent in the wilderness are extremely valuable, you'll be able to find someone who'll repair the Chrome OS for a price, but for most users... just click the "reinstall" button - local data will be wiped, and a clean, updated version of the OS will be available in a few minutes...

And if your OS is broken beyond the ability to synchronize... the chances are high it's broken beyond the ability to work with NX too...

What's the point?

Posted Jul 26, 2009 14:57 UTC (Sun) by dkite (guest, #4577) [Link]

My experience with their browser-based offerings has been one of awe that someone could even make it work at all, combined with running into problems that make it impractical to use.

I've used the spreadsheet a bit. I have a couple that are very simple and
work fine, and one that is multipage and a bit complex. It doesn't.

They were trying to use the existing installed base of browsers to expand their application suite. The server-based idea works well for some things, but the browser is a poor UI for complicated apps.

What if you opened Google Chrome and got a remote X spreadsheet running somewhere?

Derek

What's the point?

Posted Jul 27, 2009 18:42 UTC (Mon) by jzbiciak (guest, #5246) [Link]

Android runs on a Linux kernel, for one thing. Could it be that Google is building a new userland out of existing Linux pieces?

I suspect GNOME or KDE aren't the targets. Offering something built around some of the same core infrastructure (X, Linux) that GNOME and KDE are built on seems entirely reasonable and likely.

Bad and complex architecture

Posted Jul 25, 2009 8:37 UTC (Sat) by astrand (guest, #4908) [Link]

NX is able to achieve impressive performance in many cases. But still, I would say that the architecture is very bad. NX, just like X11, is based on running an X server on the client. An X server is a very complicated piece of software: it doesn't really belong in thin client software at all. For example, to provide fonts for legacy X11 applications, you must have the fonts available on the client side. In a typical setup where NX Agent is used, the whole stack gets very complicated. See http://www.gnome.org/~markmc/a-look-at-nomachine-nx.html. And although NX works fine with well-behaved applications, some applications work very badly.

VNC, on the other hand, is a much cleaner solution, where you have only one X server (Xvnc), running on the server side. The protocol is truly thin. The performance is not dependent on well-behaved applications. Since the protocol is so minimal, implementing clients is easy. And with the recent TurboVNC/TigerVNC developments, you can achieve amazing performance which allows use of 3D-heavy or video applications.

Bad and complex architecture

Posted Jul 25, 2009 12:21 UTC (Sat) by sbergman27 (guest, #10767) [Link]

"""
And with the recent TurboVNC/TigerVNC developments, you can achieve amazing performance
"""

Better than it used to be, maybe. But not "amazing" enough. Our desktops at branch offices are delivered by FreeNX. It performs quite well. I just did a mini-evaluation of TigerVNC based on your recommendation, and there is no way my users would find the performance acceptable for their daily use. Even the best VNC clients/servers today are too laggy for that.

NX *is* more complicated than I would like, which is why I jumped to try Tiger after reading your post. But Tiger is obviously not the solution.

Bad and complex architecture

Posted Jul 25, 2009 12:55 UTC (Sat) by nix (subscriber, #2304) [Link]

Quite so. VNC *is* dependent on well-behaved applications. Apps that
repeatedly repaint hunks of the screen with the same GUI elements for no
good reason are very slow/bandwidth-hungry with VNC, because it's too
simpleminded to be able to tell that this is a no-op. (Some Java apps seem
to do this quite a lot, e.g. the abomination which is Oracle Forms.)

Bad and complex architecture

Posted Jul 25, 2009 16:52 UTC (Sat) by lab (guest, #51153) [Link]

Yes, I can confirm this, and it's the same problem with RDP (MS remote desktop). Most things work fine, but it seems that applications implementing their own GUI drawing routines, rather than using native widgets, cause any remote desktop implementation relying on bitmap transfers to come to a grinding halt. A classic example is Java's Swing, and, as mentioned, modern versions of Oracle Forms, which are rendered as Java applets, probably using Swing libraries.

How does technology like pure X or NX fare with these type of apps?

Bad and complex architecture

Posted Jul 25, 2009 17:12 UTC (Sat) by astrand (guest, #4908) [Link]

Actually, I claim that the opposite is true: VNC can and does deal with bad applications well. This means that Swing-based Java applications work much better with VNC than with NX. Every time I've tested NX, I tried a Java-based application and confirmed the bad performance.

From a technical viewpoint, Xvnc does (by default) pixel-by-pixel comparisons on the framebuffer. So unless you have explicitly disabled this (-CompareFB=0), or have some strange client, I really don't understand why you are experiencing this behaviour.
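The framebuffer comparison described above can be illustrated with a toy tile-differencing function (invented for illustration; Xvnc's real implementation is C and considerably more sophisticated). Before encoding an update, the server compares the repainted region against what the client already has and sends only tiles that actually changed, so a redundant repaint produces no traffic.

```python
# Toy version of the -CompareFB idea (illustrative, not Xvnc code):
# compare a repainted frame against the previous one, tile by tile,
# and report only the tiles that actually changed.

TILE = 4   # tile size in pixels (tiny, for the example)

def changed_tiles(old, new):
    """Return (row, col) indices of tiles where `new` differs from `old`.
    Frames are lists of equal-length rows of pixel values."""
    tiles = []
    for r in range(0, len(old), TILE):
        for c in range(0, len(old[0]), TILE):
            old_tile = [row[c:c + TILE] for row in old[r:r + TILE]]
            new_tile = [row[c:c + TILE] for row in new[r:r + TILE]]
            if old_tile != new_tile:
                tiles.append((r // TILE, c // TILE))
    return tiles

frame = [[0] * 8 for _ in range(8)]

# An app "repaints" the whole window with identical pixels:
repaint_same = [row[:] for row in frame]
print(changed_tiles(frame, repaint_same))   # [] -- nothing to send

# Now one glyph actually changes in the top-left tile:
repaint_glyph = [row[:] for row in frame]
repaint_glyph[1][2] = 255
print(changed_tiles(frame, repaint_glyph))  # [(0, 0)] -- one tile to send
```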

It's better to have half a cake then no cake at all

Posted Jul 27, 2009 6:02 UTC (Mon) by khim (subscriber, #9252) [Link]

Every time I've tested NX, I tried a Java based application and confirmed the bad performance.

Don't use such applications, then. Sure, badly-written applications are unusable with NX - and most Java-based applications are pigs with regard to any and all resources. But with VNC all applications are equally unusable!

Bad and complex architecture

Posted Jul 25, 2009 17:20 UTC (Sat) by astrand (guest, #4908) [Link]

There are certainly cases where NX works better than VNC: Say, with well-behaved applications on very slow connections. But there are also many cases when VNC outperforms NX: I don't think NX can at all achieve, say, the same framerate for OpenGL applications on a fast network. TurboVNC/TigerVNC (or the Sun or ThinLinc versions of it) wins here.

For many (or even "most", in my context) customers/cases, the VNC bandwidth requirement is no problem. I can say this after delivering VNC-based solutions to customers for the last seven years.

What kind of bandwidth do you have?

Bad and complex architecture

Posted Jul 25, 2009 18:23 UTC (Sat) by sbergman27 (guest, #10767) [Link]

"""
There are certainly cases where NX works better than VNC: Say, with well-behaved applications on very slow connections.
"""

Oh, come on. The only instance in which VNC *might* outperform NX is on a LAN. And my bet would still be on NX there. We used RealVNC and then TightVNC over T1 connections for some months and we found them to be quite unacceptable for performance, and even worse (far worse) for display quality. This is for general business desktop use. Web, email, OO.o, lots of PDF viewing, often parts manuals with lots of scanned images. Some IE under Wine (unfortunately). We throw a lot at our remote displays and NX shines where VNC chokes. The mini-eval I just ran on TigerVNC reported 2047kb. It was better than when we used VNC several years ago. The video quality was good, as it immediately selected a sufficient color depth. But the performance was markedly inferior to a side by side NX session.

Regarding well vs poorly behaved apps... if there is an app that chokes NX but not VNC, I have yet to see it. Tiger/Turbo may, indeed, do well on 3D over a LAN. But my business desktop clients have generally opted against providing Doom 3 to the thin clients. Out of curiosity, what customers do you have who are interested in FPS, and what are they doing?

Bad and complex architecture

Posted Jul 26, 2009 18:01 UTC (Sun) by astrand (guest, #4908) [Link]

There are a lot of things which can affect performance. TightVNC, for example, still ships a very old VNC 3.X based version. Its client is very slow especially on Windows.

When you say "2047kb", are you referring to the "kbit/s" measurement available in the client? This is not a good measurement of how much bandwidth VNC "require".

""
Out of curiosity, what customers do you have who are interested in FPS, and what are they doing?
""

Video playback (Youtube, mplayer), custom OpenGL applications for CAD/CAM/visualisation, Catia etc.

Bad and complex architecture

Posted Jul 27, 2009 14:22 UTC (Mon) by sbergman27 (guest, #10767) [Link]

"""
When you say "2047kb", are you referring to the "kbit/s" measurement available in the client? This is not a good measurement of how much bandwidth VNC "require".
"""

I'm saying that 2047kb was the bandwidth that TigerVNC reported that it had to work with over that connection, which should presumably be the same that NX had to work with over that same connection. And for business desktop use, VNC was pretty clunky and unresponsive compared to NX. As we have no legitimate business need for video on our business desktops, we don't allow it. So I don't have a base of experience as to how well or poorly NX performs relative to *vnc for that use.

BTW, I tried TigerVNC rather than TurboVNC specifically to avoid the excuse that VNC server/client Q still uses the old 3.X protocols. Tiger uses V4.

Bad and complex architecture

Posted Jul 25, 2009 19:29 UTC (Sat) by nix (subscriber, #2304) [Link]

High-latency connections. Even ADSL has high enough latency that raw X or
VNC are seriously painful. Modem links are utterly intolerable.

(Mind you, I've never tried NX on either of these.)

Bad and complex architecture

Posted Jul 25, 2009 19:54 UTC (Sat) by sbergman27 (guest, #10767) [Link]

"""
Even ADSL has high enough latency that raw X or
VNC are seriously painful. Modem links are utterly intolerable.
"""

Raw X actually works remarkably well over a direct modem-to-modem PPP connection. There the hardware compression really helps the bandwidth, and latency is nearly instantaneous. (Ping time may be 100ms, but that's just the time it takes to actually transmit 64 bytes back and forth. The real latency is almost nonexistent.) Over a modem connection through an ISP and over the internet, both VNC and X are intolerable, raw X being worse than VNC. X's main problem is, of course, latency and not bandwidth.

I have used NX over 56k modem connections (typically 45kbps) for full desktop, fullscreen sessions, and performance is remarkably good. I sure wouldn't want to use it all day. But it's pretty serviceable. Framebuffer strategies simply cannot beat the combination of aggressive, context-aware compression and aggressive caching. To get good performance, you have to be operating at the X protocol level.
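The caching argument here can be sketched in a few lines. The following is an illustrative toy, not NX's actual wire format: an NX-style proxy remembers message payloads it has already sent and substitutes a short reference when a payload repeats, which is why the second pass through a cached session is so much cheaper than the first.

```python
import hashlib

class MessageCache:
    """Toy sketch of an NX-style protocol cache: repeated X messages
    are replaced by short references to previously sent copies."""
    def __init__(self):
        self.seen = {}       # payload digest -> small integer reference
        self.next_ref = 0

    def encode(self, payload: bytes):
        digest = hashlib.sha1(payload).digest()
        if digest in self.seen:
            # Cache hit: send a few bytes instead of the full message.
            return ("ref", self.seen[digest])
        self.seen[digest] = self.next_ref
        self.next_ref += 1
        return ("full", payload)

cache = MessageCache()
first = cache.encode(b"PutImage: toolbar pixmap ...")
again = cache.encode(b"PutImage: toolbar pixmap ...")
print(first[0], again[0])  # full ref
```

The digest choice and record shapes are arbitrary; the point is only that identical protocol messages collapse to references after the first transmission.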

NX memory usage is also significantly better. This makes a difference if you run a lot of desktops. My largest server (server in this context meaning the machine running the actual apps) runs about 65 simultaneous Gnome sessions.

Bad and complex architecture

Posted Jul 25, 2009 22:42 UTC (Sat) by nix (subscriber, #2304) [Link]

As you suspected, I was indeed going via an ISP. Direct modem-to-modem is
probably much better.

Bad and complex architecture

Posted Jul 26, 2009 18:11 UTC (Sun) by astrand (guest, #4908) [Link]

""
Framebuffer strategies simply cannot beat the combination of aggressive, context-aware compression and aggressive caching. To get good performance, you have to be operating at the X protocol level.
""

Xvnc can basically operate on the X level: The Xvnc server is aware of which drawing operation is performed. For example, it can instruct the client to move an existing rectangle, rather than repaint/resend it.
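The "move an existing rectangle" trick described here is the idea behind VNC's CopyRect encoding. A toy sketch follows (simplified record format and 1 byte per pixel for brevity; not the real RFB wire format): when the server knows a region was moved rather than redrawn, it sends source coordinates instead of pixel data.

```python
def encode_update(dst, src=None, pixels=None):
    """Return a simplified screen-update record.
    dst is (x, y, w, h); src, when given, is the (x, y) the client
    should copy from instead of receiving w*h pixels."""
    if src is not None:
        # CopyRect: a few bytes of source position replaces the pixels.
        return ("copyrect", dst, src)
    return ("raw", dst, pixels)

# Scrolling a terminal one line: move the surviving text up...
scroll = encode_update(dst=(0, 0, 640, 464), src=(0, 16))
# ...and only repaint the newly exposed bottom line (1 byte/pixel here).
newline = encode_update(dst=(0, 464, 640, 16), pixels=b"\x00" * (640 * 16))
print(scroll[0], len(newline[2]))  # copyrect 10240
```

The asymmetry is the whole point: the scroll costs a handful of bytes while only the exposed strip pays for real pixel data.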

Still, there are many more techniques (including caching) that could be added to VNC to enhance performance even further. Prototypes are already available.

""
NX memory usage is also significantly better. This makes a difference if you run a lot of desktops. My largest server (server in this context meaning the machine running the actual apps) runs about 65 simultaneous Gnome sessions.
""

Compared to what? As far as I know, GNOME uses the same amount of memory regardless of whether the Xserver is Xvnc or the NX one. It is really GNOME and its applications that consume the memory; not the Xserver. Not with Xvnc, at least.

Bad and complex architecture

Posted Jul 27, 2009 17:57 UTC (Mon) by sbergman27 (guest, #10767) [Link]

"""
Compared to what? As far as I know, GNOME uses the same amount of memory regardless of whether the Xserver is Xvnc or the NX one. It is really GNOME and its applications that consume the memory; not the Xserver. Not with Xvnc, at least.
"""

Compared to RealVNC, which is what I have actual experience with in an environment in which they could be directly compared. And I think you might be surprised, in an environment with many desktop sessions running, just how low the requirements of the actual desktop are when shared-memory savings are figured in. At the time that I ran the comparison, I believe we were running about 74MB per desktop user. (4096MB RAM for 55 users. And yes, significant swap was used, although the actual swapping overhead was relatively low.)

I don't remember the actual numbers, but the nxagent and Xvnc processes tended to be at the top of the 'top' output when sorted by memory use. And the (res - shared) values were consistently better for nxagent than for Xvnc. NX is doing something more efficiently. I'm not sure what.
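For anyone wanting to reproduce this kind of comparison, the (res - shared) figure that top reports can be read directly from /proc on Linux. A small sketch (Linux-only; field layout per proc(5), values in pages):

```python
import os

def res_minus_shared_kib(pid):
    """Private resident memory (RES - SHR, as top reports it) for a
    process, read from /proc/<pid>/statm.  All fields are in pages."""
    with open(f"/proc/{pid}/statm") as f:
        size, resident, shared = map(int, f.read().split()[:3])
    page_kib = os.sysconf("SC_PAGE_SIZE") // 1024
    return (resident - shared) * page_kib

# Measure this interpreter itself as a demonstration.
print(res_minus_shared_kib(os.getpid()), "KiB private resident")
```

Running it against each nxagent and Xvnc PID (and summing nxagent with its helper processes, as suggested below) gives a like-for-like number.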

Bad and complex architecture

Posted Jul 27, 2009 19:37 UTC (Mon) by astrand (guest, #4908) [Link]

""
I don't remember the actual numbers, but the nxagent and Xvnc processes tended to be at the top of the 'top' output when sorted by memory use. And the (res - shared) values were consistently better for nxagent than for Xvnc. NX is doing something more efficiently. I'm not sure what.
""

Perhaps this is due to the fact that NX has multiple processes (NX agent and NX proxy)? You'll need to count both for a valid comparison.

In any case, the Xvnc memory usage is not a big deal. On my Fedora 11 system, the local Xorg now consumes 63 MiB (resident), while Xvnc only consumes 34 MiB.

Bad and complex architecture

Posted Jul 30, 2009 4:34 UTC (Thu) by sbergman27 (guest, #10767) [Link]

No nxproxy. Just nxagent and nxnode, with nxnode using less than a meg of res - shared.

For X, most of that "resident" is resident *video RAM*, often mapped multiple times, and not system RAM. X servers always look like they are using a lot more system RAM than they really are.

At any rate, when we switched from VNC to NX, I recall that the reduction in memory use was clearly discernible in the sysstat reports. Note that the 34MB for the Xvnc process represents 3.4GB with a hundred desktops running. When people tell me that program Q "only" consumes 34MB, I automatically multiply by 70 to see what the *real* consumption would be if my users were running it on my most heavily loaded server. It makes a difference.
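That multiply-by-your-user-count habit is trivial to encode; a sketch using the (hypothetical, discussion-derived) figures of 34MB per Xvnc process and 100 desktops:

```python
# Back-of-the-envelope sizing for a multi-user terminal server.
# Both figures are illustrative, taken from the discussion above.
per_session_mb = 34    # one display-server process, resident
users = 100

total_gb = per_session_mb * users / 1000
print(f"~{total_gb:.1f} GB of RAM just for the display servers")
```

A per-process number that looks harmless on a workstation becomes a dominant line item on a loaded terminal server.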

Bad and complex architecture

Posted Jul 30, 2009 7:35 UTC (Thu) by astrand (guest, #4908) [Link]

""
No nxproxy. Just nxagent and nxnode, with nxnode using less than a meg of res - shared.
""

As I understand it, nxnode is a shell script, so it's no surprise that it consumes less memory than a full Xserver...

Why are you not running a nxproxy?

The nxagent should be the Xserver, so that is what should be compared to Xvnc. One theory why it consumes less memory than Xvnc is that it is (AFAIK) based on a very old X implementation, X.Org 6.9 or something like that. Back when we delivered Xvnc based on that old implementation, it was also more lightweight and consumed less than 10 MiB or so. But of course, it also lacked many modern X extensions.

It's not the VNC part of Xvnc that consumes the RAM; it's the X server.

I agree with you that small figures can turn into large numbers when you multiply by the number of users. But I still claim that even 28 MiB (RES-SHR, right now) is not a big deal if you are running a full, modern desktop environment and mainstream applications. These typically consume much, much more.

Bad and complex architecture

Posted Jul 30, 2009 15:11 UTC (Thu) by sbergman27 (guest, #10767) [Link]

I have pretty good numbers on what it takes to run 50-70 Gnome desktops. I went back and looked, and when we made the VNC->NX switch, we were 32 bit and running about 50 desktops on 4GB memory. So about 82MB per user. (Surprised?)

Apparently the functions of nxproxy have been subsumed by nxagent. We're running freenx 0.7.3 with nxlibs 3.3. NX has become somewhat simpler.

Although we have no need for multimedia, and in fact actively discourage it, I did run a side by side comparison of NX and Tiger last night using this amusing and endlessly fascinating video:

http://www.elphel.com/3fhlo/samples/333_samples/m021_300_...

This is on a 1680x1050 screen. It comes up at 320x240 or so. And Tiger reports a 2177kbit connection speed. At that size, both NX and Tiger are jerky. I clearly see each frame update. However, the Tiger instance seems to update at a more even rate. The NX framerate jumps around a lot, which is annoying. I'd give VNC the edge, there. I would say that Tiger is barely usable at that size. If I maximize the totem window, or go full screen, both Tiger and NX go down the toilet. Maybe 2 frames per second. In fact, neither Tiger nor NX are usable if I increase much at all over 320x240.

Interestingly, if I let NX run it through several times, it eventually will play at any size, even full screen, with silky smoothness. Tiger can't begin to match NX's client-side caching. But of course, for this use, that's cheating.

Running this test, I spent a good bit of time on both types of remote desktop. And did some more testing of things like browsing the web and scrolling through PDFs. With NX, it's so easy to forget I'm not on the local machine, that I *have* to make sure to use a different wallpaper remotely and locally. And even then I have to stop and remind myself whether I'm remote or local. With Tiger, I can *never forget* that I'm on a laggy remote connection. My users would storm in and lynch me if I tried to saddle them with it.

What I get out of all of this is that on a 2 mbit connection with a 75ms ping time:

- NX is *far* superior for normal business desktop work. (Like night and day. There's no comparison.)
- For videos, Tiger is about the same speed (or perhaps slightly faster) and notably smoother for very small videos.
- Neither NX nor Tiger are usable at all for videos much beyond 320x240.
- FreeNX is getting simpler. And Neatx is about to take that simplification to the next level.

FWIW, I do use VNC to remotely administer the legacy windows clients.

Bad and complex architecture

Posted Jul 30, 2009 18:00 UTC (Thu) by astrand (guest, #4908) [Link]

Interesting to read your extensive tests.

""
...when we made the VNC->NX switch, we were 32 bit and running about 50 desktops on 4GB memory. So about 82MB per user. (Surprised?)
""

Not really. There are many different use cases, and this memory usage is actually what we recommended a few years ago (about ~50 MiB per user). But since then, the desktop and applications have started to grow. This is why we are now recommending 100-150 MiB instead. This is enough to cover a typical rich desktop with heavy applications such as OpenOffice, Firefox, Google Earth etc. But of course, for other users it may still be perfectly fine with less than that.

Regarding video: Does totem change the resolution to 320x240 before playing the video? If not, 1680x1050 is what's going to be used. I understand if this gets jerky on 2 Mbit. In general, video playback only works well on LANs.

We regularly test video with sound at 1024x768, and that works great, but on a LAN.

Bad and complex architecture

Posted Sep 14, 2009 12:21 UTC (Mon) by sushisan (guest, #60822) [Link]

My experience is totally different!!!

We have a client with 8 terminals only. For the migration we have a server with a Phenom X4 and 8 GB of RAM.

We use FreeNX on the server (F11) and the NX client, at first.

The bandwidth usage is very good, even with a SLOOOOOOOOW internet connection.

But the memory usage is a disaster!!! The nxagent's memory usage never stops growing until the system starts to swap, whether or not the session is actually being used (just with a connection active)!!!

I can't find any solution to this; it happens even when I switch off the cache.

Now we've migrated to Xvnc and it works pretty well.

Bad and complex architecture

Posted Jul 30, 2009 15:32 UTC (Thu) by sbergman27 (guest, #10767) [Link]

And I had intended to mention that NX is a complete remote solution in itself, including an esd sound server, samba-based client printer sharing, and samba-based file sharing. Whereas VNC is just video. Not that we make use of all that. But it sounds like your customers might. That narrows the complexity difference yet more.

Bad and complex architecture

Posted Jul 30, 2009 18:14 UTC (Thu) by astrand (guest, #4908) [Link]

Exactly, I'm glad that you are pointing this out. Many people just compare, say, the graphics performance and don't realize that it takes much more for a complete solution.

I'm trying not to "sell" ThinLinc too much in this forum, but since you brought this up: In ThinLinc, we are implementing access to local devices using completely separate protocols, all running on top of SSH. This allows us (just like NX) to use existing and open protocols. But unlike NX, we use the real upstream versions, instead of forks. This allows us to work close with the community.

I haven't checked what NX provides nowadays, but I'm quite sure that we actually have better support for local devices. For example, we are supporting:

* Sound (using PulseAudio, superior to ESD).
* Serial port redirection
* File access (using NFSv3/unfs3, much more lightweight than Samba)
* Local printing (no local configuration necessary)
* Smart card redirection and authentication

All of the above are supported both on Windows and Linux. We have clients for other platforms such as Solaris and Mac OS X (but not yet with full support for all types of local devices.)

Bad and complex architecture

Posted Jul 30, 2009 19:36 UTC (Thu) by sbergman27 (guest, #10767) [Link]

Out of curiosity, what are the licenses on the ThinLinc client and server?

Bad and complex architecture

Posted Jul 30, 2009 20:06 UTC (Thu) by astrand (guest, #4908) [Link]

The ThinLinc product contains many components and thus has multiple licenses. In short, the core "protocol handlers" are all open source while the control processes are proprietary, but with a free (free-as-in-beer) license for 10 users. For more information, see http://www.cendio.com/legal/.

Bad and complex architecture

Posted Jul 30, 2009 20:54 UTC (Thu) by sbergman27 (guest, #10767) [Link]

"""
I'm trying not to "sell" ThinLinc too much in this forum, but since you brought this up:
"""

Well, it was probably inevitable from the time that you, Peter Åstrand, "Chief Developer" for Cendio's ThinLinc product, a proprietary competitor to NX, started a thread on LWN.net entitled "Bad and complex architecture", trashing NX, and talking up TigerVNC more than its actual performance justifies, without really disclosing who you were until finally dropping a hint long after everyone else had moved on from this thread.

Shame on you. When we think Microsoft is doing that we call it "Astroturfing".

I have found our conversation enjoyable and enlightening, and my resulting research beneficial and educational. But I can't help feeling annoyed.

Bad and complex architecture

Posted Jul 30, 2009 21:54 UTC (Thu) by astrand (guest, #4908) [Link]

I'm sorry to hear that you feel "annoyed"; it was certainly not my intent. I'm writing this in my spare time, as a private individual. Cendio is not paying for my LWN subscription. I don't think that one needs to introduce themselves before writing in this forum. Nor should it be necessary to include disclaimers of the type "this is my personal opinion and not my employer's".

Bad and complex architecture

Posted Jul 30, 2009 22:37 UTC (Thu) by sbergman27 (guest, #10767) [Link]

"""
I'm writing this in my spare time, as a private individual...
"""

... with a direct and substantial financial interest in discrediting the competition. And if you were to let people know about your direct and substantial financial interest up front you would have less credibility when doing it.

Who it is who pays for your LWN subscription, and whether it happens to be during business hours or not is completely beside the point.

And when such clear conflict of interest exists, it *is* customary to declare such.

disclaimers

Posted Jul 31, 2009 0:47 UTC (Fri) by xoddam (subscriber, #2322) [Link]

Well it certainly is customary to declare a pecuniary interest upfront, but I would not have thought that diminished one's credibility. Indeed it has been most enlightening on occasion when, for instance, openly-declared vmware hackers have weighed in on the subject of virtual machine internals in the context of a discussion of several competing free vm implementations.

Bad and complex architecture

Posted Jul 31, 2009 13:29 UTC (Fri) by sbergman27 (guest, #10767) [Link]

Agreed. Industry players commenting about the relative merits of their respective approaches is a good thing. Just not when it comes in the form of astroturfing. (Åstrand-turfing?)

Bad and complex architecture

Posted Jul 31, 2009 16:26 UTC (Fri) by sbergman27 (guest, #10767) [Link]

The above was supposed to be a reply to xoddam, of course.

Bad and complex architecture

Posted Aug 7, 2009 13:31 UTC (Fri) by deleteme (guest, #49633) [Link]

A little harsh perhaps? I found the discussion very interesting.

Bad and complex architecture

Posted Jul 26, 2009 8:30 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

How can X possibly lose for OpenGL? You know that OpenGL is network-transparent on X, right?

Years ago I watched someone play my copy of GLQuake from their machine over GLX quite comfortably. It wasn't quite as fast as a local copy of the game, but it was very playable. The VNC solution, in addition to chewing all the CPU on both machines and _still_ requiring a fast OpenGL implementation on the machine running the game, would have also chewed up all the network bandwidth, and I'm not convinced it would even have been playable.

My own OpenGL software is much less pretty than Quake, but it has practical reasons to be used over a network. Runs fine, but in VNC it would definitely incur more or less fullscreen refreshes at 60Hz. That's a LOT more bandwidth and CPU than a few thousand vertices every so often and a set of transform matrices per frame.
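The rough arithmetic behind that comparison, with all figures assumed for illustration (and pessimistically resending the geometry every frame, whereas GLX display lists would send it once):

```python
# Bandwidth estimate: streaming full-screen framebuffer refreshes (the
# VNC-style worst case) vs. sending vertex data and transforms (GLX).
# All numbers below are hypothetical, chosen to match the scenario in
# the comment: modest screen, a few thousand vertices per frame.

width, height, bpp, fps = 1280, 1024, 24, 60
framebuffer_mbit = width * height * (bpp // 8) * fps * 8 / 1e6

vertices, bytes_per_vertex = 5000, 32   # xyz + normal + uv, assumed
matrix_bytes = 4 * 4 * 4                # one 4x4 float transform/frame
glx_mbit = (vertices * bytes_per_vertex + matrix_bytes) * fps * 8 / 1e6

print(f"raw frames: {framebuffer_mbit:.0f} Mbit/s, "
      f"geometry: {glx_mbit:.1f} Mbit/s")
```

Even with the geometry resent every frame, the pixel stream is more than an order of magnitude heavier, which is the "many times more resources for the same result" point.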

Maybe you have a special definition of "outperform" but for me if you needed many times more resources to get the same result, that is less rather than more performance.

Bad and complex architecture

Posted Jul 26, 2009 14:06 UTC (Sun) by mjg59 (subscriber, #23239) [Link]

Up until aiglx was merged, remote GL typically wasn't accelerated on Linux - the only real exception was the binary nvidia drivers. Things are much better now.

Bad and complex architecture

Posted Jul 26, 2009 18:18 UTC (Sun) by astrand (guest, #4908) [Link]

OpenGL over the network (remote GLX) only works well for a small number of applications: those that send (data-wise) small commands to the graphics card and then let it do all the hard work, transforming those commands into many pixels. But most OpenGL apps nowadays don't work that way: they need to transfer large amounts of data to the graphics card. If it is remote, the data needs to be transferred over the network, and things will be slow. So for remote 3D solutions, it's really much better to have the graphics card in the server; in the same machine that executes the applications. This is exactly what VirtualGL is about.

If you want to use VirtualGL with VNC, however, you'll need an accelerated implementation (TurboVNC/TigerVNC/Sun Shared Visualization/ThinLinc) to get good performance.

Google releases Neatx NX server

Posted Jul 25, 2009 8:51 UTC (Sat) by njs (guest, #40338) [Link]

Neat! From a quick look, it seems they still depend on the upstream 'nxagent', which makes me sad.

For those unfamiliar with NX's innards, 'nxagent' is the somewhat opaque name that NoMachine gave to the actual X server that's been hacked to run headless, display on other X servers via the NX protocol, and reconnect to a different remote X server on request -- i.e., the part that provides the core functionality. What Google has released is a rewrite of the management scripts that surround this core -- the way you connect to an NX session is that you ssh in as the magic user 'nx', then speak a special protocol to log in again as your actual user, then speak another special protocol to manage sessions, and then get a proxy to the actual underlying nxagent. Various programs are needed to implement these protocols, spawn and manage the actual nxagents, proxy to them, etc. The freenx implementations are rather grotty, as mentioned, and Google's look much cleaner. Really, what *I* want is a way to skip them altogether, though, on both aesthetic and practical grounds.

Unfortunately, nxagent itself is (last I checked) a forked version of the old XF86 monolithic tree, developed by an "occasionally throw a big pile of undocumented tarballs over the wall" method, and while it *should* make it easy to get a simple persistent X session running under a single user account, there are some restrictions that make it hard. I don't remember exactly what problems I ran into anymore -- hopefully someone will correct me if I'm wrong -- but IIRC you cannot *start* a session in headless mode, only start it in attached mode and then disconnect; and if you tell it that you want to reconnect but fail to do so within 30 seconds, it will just quit and take down all your running programs. If your wireless drops out at the wrong time, then sucks to be you.

(Standard disclosure/shameless plug: I was so frustrated that I wrote a competitor, 'xpra'.)

Google releases Neatx NX server

Posted Jul 25, 2009 13:24 UTC (Sat) by rvfh (guest, #31018) [Link]

This XPRA stuff is really neat! Is there any nice front-end for novice users? I guess the way it works might not be obvious for loads of people...

Ideally you'd want to initiate the server from a client, and apps from there too.

Anyway, I'll see how fast it is from work, and if it is, then xpra will become part of my list of must-install packages!

Google releases Neatx NX server

Posted Jul 26, 2009 3:26 UTC (Sun) by njs (guest, #40338) [Link]

> Is there any nice front-end for novice users?

There isn't any whizzy GUI front-end right now, no -- you just say 'xpra attach ssh:whereever' (or click on a shortcut to that), and Control-C to kill it. But the client is <500 lines of Python, so it wouldn't be hard to add more bells and whistles if someone cares to do so.

> Ideally you'd want to initiate the server from a client

Good point! I wrote the patch for this... but then realized that there's a problem: the client doesn't necessarily know where the xpra executable is installed on the server, so it can't necessarily find it to start it. You can work around this by requiring that it always be installed in a well known location, but then we're back to the heavy-weight NX-style setup (I don't have root on the server where I use xpra!). And typing 'ssh whereever xpra start' isn't really harder than typing 'xpra start ssh:whereever' anyway.

(This problem doesn't arise when connecting to an already running server, because the server does some magic at start-up time so the client can always find it.)

> and apps from there too.

This is easy, but would need that fancier client GUI to be useful. (It'd just be equivalent to 'ssh <host> env DISPLAY=<...> nohup <command>', anyway, except that if we built it in, then a client could make that request over its existing ssh connection without spawning a new one. Of course, you could just run a terminal under xpra too.)

Google releases Neatx NX server

Posted Jul 28, 2009 13:34 UTC (Tue) by kh (subscriber, #19413) [Link]

I would really like to have some connection program like crossloop.com so I could remote in to help family members who have switched to Linux when we are both behind NAT'd firewalls.

Adding some SPICE to this conversation?

Posted Jul 25, 2009 14:41 UTC (Sat) by dowdle (subscriber, #659) [Link]

What I'd like to see is a comparison of the commercial NX (and any of the free projects that come from it) to the SPICE protocol developed by the Qumranet folks and currently being integrated into the upcoming RHEL 5.4 release.

According to a SolidICE demonstration video from some time ago on the Qumranet website (http://us.qumranet.com/videos/Qumranet.wmv), they claim the SPICE protocol can drive a four-head display, support bi-directional audio, and play full-screen HD video on one of the displays... at local-computer speeds. Of course that is over a LAN, but one would hope that it would work well over a WAN sans multimedia. The SPICE protocol is patented, but hopefully, given Red Hat's track record, they'll open up the protocol with a reasonable license? SPICE is being used as a delivery method for KVM-based virtual desktops, but surely it could be decoupled and made into a general-purpose remote-desktop client/server solution.


Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds