Google releases Neatx NX server
On July 7, internet search giant Google not only announced its operating system, Google Chrome OS, with much fanfare, but also quietly released Neatx, an open source NX server. According to the announcement, Google has been looking at remote desktop technologies for quite a while. While the X Window System has issues with network latency and bandwidth, the NX protocol compresses X requests and reduces round-trips, resulting in much better performance — to the point that it can be used over network connections with low bandwidth.
So with Neatx, users can log in to a remote Linux desktop. Moreover, the session can be suspended and resumed later from another computer, resembling the functionality that GNU screen offers for console sessions. But, unlike screen, a Neatx user has access to the GUI of the remote machine, just as if they were sitting in front of it.
The NX protocol, using SSH as a transport and for authentication, was developed by the Italian company NoMachine, which released the source code of the core NX technology in 2003 under the GPL. NoMachine offers free (as in beer) client and server software for various operating systems, including Linux. It wasn't very long before free-as-in-speech NX clients emerged; then, in 2004, Fabian Franz implemented FreeNX, a GPL implementation of an NX server.
FreeNX development stalls
However, after a number of years the FreeNX project is facing some serious problems. Franz hasn't responded to e-mails on the developer mailing list for a long time, and he seems to be the only one able to check code into the repository. As a consequence, development has stalled, which prompted Florian Schmidt to ask on the mailing list about the project's future.
Because upstream FreeNX development has stalled, downstream packagers have essentially picked up the development. There is a FreeNX team that maintains the Debian and Ubuntu packages. These maintainers push appropriate patches to their branch and thus have the most up-to-date repository, with some extra features the official FreeNX server doesn't have, such as shadowing of local X sessions and stubs for guest sessions. Marcelo Boveto Shima, one of the maintainers, noted FreeNX's problems in a post to the FreeNX mailing list: "Working on FreeNX is a dead-end and it is becoming too hackish." He decided to write his own NX server, TaciX. In the meantime, the Debian/Ubuntu repository has become the "upstream" for Gentoo's FreeNX package.
A new NX server from scratch
Shima wasn't the only one disappointed in FreeNX development. According to Google, the server was "written in a mix of several thousand lines of BASH, Expect and C, making FreeNX difficult to maintain." That's why some developers at Google designed Neatx, a new implementation based on NoMachine's open source NX libraries.
Google implemented Neatx because the company operates a large number of virtualized workstations in clusters [PDF], running on its cluster virtual server management software tool, Ganeti. Neither plain X nor VNC is responsive enough to allow logging in to a virtual workstation from home or over a wireless connection and working smoothly, which led Google to turn to the NX protocol. An added bonus is that the protocol allows a session opened at the office to be restored from home, and vice versa. In the release announcement, the developers noted that Neatx implements some features not found in FreeNX, but also that it lacks some other features that FreeNX has.
Neatx in action
Your author tried both QtNX and NoMachine's NX client to connect to FreeNX 0.7.3 and Neatx on Ubuntu 9.04. Because the Neatx project has not yet made an official release, your author checked out the latest source code and built it. It turned out that QtNX can't connect to Neatx because of a version mismatch, and the Neatx developers seem to test their server software only with NoMachine's NX client, so that is the only supported client for now.
Session creation, suspension, resumption, and shutdown all work well in Neatx. Users can choose between Gnome, KDE, Application, and Console sessions, and they can run their session on a virtual desktop or as a floating window. They are also able to set the keyboard preferences, the resolution, and choose full-screen mode. Neatx supports session shadowing, the ability for multiple users to view and collaborate within the same NX session. For the moment that only works with sessions belonging to one user, so it's not that usable yet. Sharing of the X clipboard also works flawlessly.
A couple of things don't work yet. For example, terminating an open session from the session list isn't possible: the user first has to resume the session and then terminate it. Tunneling of sound, printers, and Samba is also not yet implemented. And Neatx doesn't support RDP (the remote desktop protocol for Windows) or VNC sessions, something that FreeNX does. There are also some loose ends, as the code is still alpha quality. However, the Neatx Google Group is pretty active and already has some interesting suggestions for further development, such as a jailed NX, enabling users to NX into a server without being able to see any other user's data, and printer tunneling.
Although the simultaneous announcements of Google Chrome OS and Neatx seem to be pure coincidence, both are based on the thin client concept. Chrome OS is a perfect operating system for the casual user with a netbook connected to the internet, running most applications in a web browser. For applications that don't run inside the browser, a Neatx server on Google's or someone else's servers can offer a desktop "in the cloud" that can be accessed from anywhere. Google's own use of Neatx for virtual workstations shows that the thin client concept is reviving. Hopefully it will also revive developers' interest in contributing to a free NX server, which is an essential component for this development.
Index entries for this article
GuestArticles: Vervloesem, Koen
Google releases Neatx NX server
Posted Jul 24, 2009 22:06 UTC (Fri) by ejr (subscriber, #51652) [Link]
Google releases Neatx NX server
Posted Jul 27, 2009 16:44 UTC (Mon) by iabervon (subscriber, #722) [Link]
Google releases Neatx NX server
Posted Jul 24, 2009 22:30 UTC (Fri) by MattPerry (guest, #46341) [Link]
Google releases Neatx NX server
Posted Jul 24, 2009 22:38 UTC (Fri) by dlang (guest, #313) [Link]
One significant thing that NX does is provide a local server to act as a proxy for these sorts of things: if it already knows the answers, it provides them without having to actually go and ask the display.
'Fixing the X protocol' to do the same thing would end up looking very similar: a local daemon that local applications think is their display, and that remembers the answers from prior requests.
You can't eliminate these calls without eliminating backwards compatibility, and so far nobody has been willing to do that.
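The caching-proxy idea described in this comment can be sketched in a few lines of Python. This is purely illustrative (the class and names are hypothetical, not the actual nxproxy code): idempotent queries are answered locally after the first round trip, while everything else is forwarded to the remote display.

```python
# Sketch of a caching display proxy, in the spirit of what an NX-style proxy
# does for idempotent X queries. All names here are illustrative.

class CachingDisplayProxy:
    """Sits between local clients and a remote display server.

    Idempotent query requests (atom lookups, extension queries, ...) are
    answered from a local cache after the first round trip; everything
    else is forwarded to the remote display.
    """

    def __init__(self, remote):
        self.remote = remote          # callable: request -> reply
        self.cache = {}               # request -> cached reply

    def request(self, req, idempotent=False):
        if idempotent:
            if req not in self.cache:
                self.cache[req] = self.remote(req)   # one round trip
            return self.cache[req]                   # answered locally
        return self.remote(req)                      # always forwarded


# Usage: count how many round trips actually reach the "remote display".
trips = []
proxy = CachingDisplayProxy(lambda req: trips.append(req) or f"reply:{req}")

for _ in range(3):
    proxy.request("InternAtom WM_CLASS", idempotent=True)

print(len(trips))   # only 1 round trip for 3 identical queries
```

Three identical queries cost a single network round trip; the daemon "remembers the answers from prior requests", exactly as the comment describes.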
Google releases Neatx NX server
Posted Jul 26, 2009 4:10 UTC (Sun) by sbergman27 (guest, #10767) [Link]
"""
the biggest problem with X is that it frequently takes a _lot_ of round trip messages to do standard things.
"""
True. Watching an X session over a dial-up modem (external, with tx/rx lights) is quite interesting. Not only are there a lot of round trips, but it appears that everything happens serially, with only one request or response in the pipe at any given time. In the modem's LEDs you can clearly see "request, response, request, response, request, response...".
Many people think that X needs bandwidth. It actually isn't such a bandwidth hog. Others correctly point out the round trip issue. But rarely have I heard anyone comment upon the serialized nature of the protocol. And that looks like the real performance killer to me. At least over any sort of WAN. At Ethernet latencies it's likely not a problem at all.
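The cost of that serialization is easy to put numbers on. A rough sketch, with an assumed 150 ms dial-up round-trip time and 200 requests (both figures are illustrative, not measurements from this comment):

```python
# Rough effect of a serialized request/response pattern versus pipelining.
# rtt_ms and the request count are assumed, illustrative numbers.

rtt_ms = 150        # assumed dial-up round-trip time
requests = 200      # e.g. X requests needed to map and draw a window

serialized_ms = requests * rtt_ms   # one request in flight at a time
pipelined_ms = rtt_ms               # all requests streamed back-to-back,
                                    # paying roughly one round trip total

print(serialized_ms / 1000, "s serialized")   # 30.0 s serialized
print(pipelined_ms / 1000, "s pipelined")     # 0.15 s pipelined
```

With these made-up numbers, serialization turns one round trip's worth of latency into thirty seconds of waiting, which is why it, rather than bandwidth, looks like the real performance killer over a WAN.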
Google releases Neatx NX server
Posted Jul 26, 2009 9:28 UTC (Sun) by njs (guest, #40338) [Link]
That's why relatively crude protocols like VNC can completely outclass X -- sure, now you're stuffing giant blocks of pixels down the network pipe and taking way more bandwidth, but those giant blocks of pixels have no dependencies -- so instead of waiting around all the time, you can just saturate the pipe. (rsync uses a similar strategy; see also pipelined SMTP, IMAP, HTTP, ...)
Google releases Neatx NX server
Posted Jul 26, 2009 17:38 UTC (Sun) by elanthis (guest, #6227) [Link]
Google releases Neatx NX server
Posted Jul 26, 2009 18:58 UTC (Sun) by sbergman27 (guest, #10767) [Link]
Google releases Neatx NX server
Posted Jul 27, 2009 9:34 UTC (Mon) by PO8 (guest, #41661) [Link]
Very bad idea
Posted Jul 27, 2009 12:31 UTC (Mon) by khim (subscriber, #9252) [Link]
A proxying solution like NX can perhaps help to tackle the problem with less effort than optimizing the X client side, serving as a stopgap until everybody's latency is so low that no one cares anymore.
Latency will never go away. That's just a fact of life. The speed of light is 300,000 km/s and the Earth's circumference is 40,000 km, so the minimum possible worst-case round-trip latency (to the antipode and back) is about 133 ms. In reality it can be somewhat reduced by using a few datacenters (like Google does), but it cannot be reduced enough to make remote usage of an X server "in the cloud" possible...
Very bad idea
Posted Jul 27, 2009 23:16 UTC (Mon) by marcH (subscriber, #57642) [Link]
... in vacuum. In fiber or electrical cables, it is typically 200,000 km/s. That's 5 milliseconds per 1000 km. Multiply by two to get the round-trip time, and multiply once again by (typically) two to account for buffering and processing in active network nodes.
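The arithmetic in this comment works out as follows (a quick check of the stated figures, nothing more):

```python
# Propagation-delay arithmetic: signals in fiber or copper travel at roughly
# 200,000 km/s, i.e. 5 ms per 1000 km one way.

speed_km_per_s = 200_000
distance_km = 1000

one_way_ms = distance_km / speed_km_per_s * 1000   # 5.0 ms one way
rtt_ms = 2 * one_way_ms                            # 10.0 ms round trip
with_buffering_ms = 2 * rtt_ms                     # ~20 ms after the rough
                                                   # x2 for node buffering

print(one_way_ms, rtt_ms, with_buffering_ms)   # 5.0 10.0 20.0
```

So a 1000 km path already costs on the order of 20 ms per round trip before any server-side work happens, and every serialized X round trip pays it again.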
Very bad idea
Posted Jul 31, 2009 23:24 UTC (Fri) by sbergman27 (guest, #10767) [Link]
Very bad idea
Posted Aug 1, 2009 9:59 UTC (Sat) by modernjazz (guest, #4185) [Link]
for us.
Sorry, bad math
Posted Aug 2, 2009 4:40 UTC (Sun) by khim (subscriber, #9252) [Link]
"""
Why run a 40,000km cable when you could just run a 3 foot cable the opposite direction?
"""
If you use a 3-foot cable you'll still be 20,000km from the destination. You get 40,000km when the two points are antipodes: it doesn't matter which way you go, it's 20,000km one way and 20,000km the other way - direction is irrelevant. The only way to reduce the distance is to bore a hole, and the deepest hole known to man (less than 15km) doesn't shrink the distance all that much...
Exactly!
Posted Aug 9, 2009 12:12 UTC (Sun) by khim (subscriber, #9252) [Link]
A device capable of boring straight through the Earth? I know a few guys who'd pay BIG BUCKS for such a thing - where are you selling it?
And if you are not selling one, then the only way to get to the antipode is along the Earth's surface - so yes, 40,000km (give or take) is the shortest path available...
Exactly!
Posted Aug 13, 2009 19:46 UTC (Thu) by hummassa (guest, #307) [Link]
The distance between antipodes is half that - 20000km ALONG THE EARTH SURFACE. It would be ~12000km thru the center of the Earth.
Exactly!
Posted Aug 13, 2009 19:58 UTC (Thu) by johill (subscriber, #25196) [Link]
Google releases Neatx NX server
Posted Jul 29, 2009 8:54 UTC (Wed) by mjthayer (guest, #39183) [Link]
Google releases Neatx NX server
Posted Jul 29, 2009 17:59 UTC (Wed) by dlang (guest, #313) [Link]
the vast majority of people only use X on a local machine and never touch the network
this is especially true for developers.
it's also true that most people have no problem with the bloat of current desktop software because they run on recent machines that are fast and have lots of ram.
In both cases this doesn't mean that there isn't a problem, or that fixing the problem wouldn't improve things for everyone, with drastic improvements for some users (possibly drastic enough to open an entire new category of use); it just means that unless it's pointed out to people and they are encouraged to _try_ the use cases that have problems, they will never notice them.
personally I suspect that the linux desktop bloat had some impact on the weak showing of linux on netbooks. most linux distros and desktop environments really want more resources than a netbook has. Linux can run fine on that sort of hardware (I know, I used much lesser hardware for many years), but the current crop of desktop environments really don't care about resource use.
Google releases Neatx NX server
Posted Jul 29, 2009 19:38 UTC (Wed) by mjthayer (guest, #39183) [Link]
Google releases Neatx NX server
Posted Jul 29, 2009 20:46 UTC (Wed) by dlang (guest, #313) [Link]
If you add a fraction of a second of latency you shouldn't notice it for most things, but if you see points in your code start taking significantly more time, they are doing many serialized round trips.
Google releases Neatx NX server
Posted Jul 24, 2009 23:20 UTC (Fri) by ejr (subscriber, #51652) [Link]
However, to many the critters have left the barn on X over the network. DRM. Fonts. There's enough pain to make people hesitate.
Google releases Neatx NX server
Posted Jul 25, 2009 12:36 UTC (Sat) by nix (subscriber, #2304) [Link]
Fonts: put them on the client, rather than the server (or both, if you want core fonts to work too, but that's getting quite unimportant these days).
Google releases Neatx NX server
Posted Jul 26, 2009 13:31 UTC (Sun) by sbergman27 (guest, #10767) [Link]
"""
Fonts: put them on the client, rather than the server (or both, if you want core fonts to work too,
"""
Can't you just:
Xorg -query myserver.localdomain -fs tcp/myserver.localdomain:7100
and make sure xfs is running on myserver (and listening on tcp:7100) and have all the server's installed fonts available everywhere? One shouldn't have to upgrade a thin client once it is in place. Only the server.
Google releases Neatx NX server
Posted Aug 11, 2009 15:41 UTC (Tue) by Lurchi (guest, #38509) [Link]
Terminal Server == X client
Terminal (Thin) Client == X display server
So you install the fonts on the terminal server (once) and the program will render the same on every client.
Google releases Neatx NX server
Posted Jul 27, 2009 15:31 UTC (Mon) by ejr (subscriber, #51652) [Link]
The font problem isn't technical, it's licensing. If you want more than bitmaps in the US, you need the appropriate license for the font. Elsewhere, the font needs to be licensed regardless of use pre- or post-rendering. There are good, free fonts now, but I'm sure you know apps that aren't agreeable to using anything but their pre-defined, proprietary fonts. Again, these issues could be managed, but they're a pain. Just configuring font matching is enough of a pain anyway (not a dig, it's a difficult problem).
A VNC-like proxy sidesteps most of these. An NX proxy, well, it still runs into problems with heavy graphics (scientific visualizations, how I use it), but it does make network use feasible again.
Google releases Neatx NX server
Posted Jul 25, 2009 3:31 UTC (Sat) by dkite (guest, #4577) [Link]
before that, looks like 2003ish.
It may be that fixing the protocol within the X community wasn't possible. After all, it wasn't the only thing needing fixing, and xorg was forked to make the necessary reworking possible.
NX is very quick and responsive. It did a very nice job with resolutions, making my netbook work very well over a slow link to my desktop machine.
I wonder if Google is going to offer a virtual desktop service for their netbook OS? I was poking around setting such a thing up for myself but ran into the unfinished parts of the free NX server.
Derek
What's the point?
Posted Jul 25, 2009 5:03 UTC (Sat) by khim (subscriber, #9252) [Link]
I mean: what can they offer to Chrome OS users? A typical Linux desktop? 2007 showed that users are not interested in that. It looks like a typical internal project: engineers are familiar with Linux and, I guess, use Linux a lot at Google, so it makes sense to offer them a virtualized Linux with Neatx. As for Chrome OS... this is an OS for "mere mortals", right? They have no use for that stuff - unless it's a Windows-based virtual computer. And the last thing Google wants is to promote Windows further.
That being said, I'm pretty sure someone will provide such a service. But for that they'll need some free client ported to NaCl, and right now Neatx only works with the proprietary client... oops?
The reverse
Posted Jul 25, 2009 8:15 UTC (Sat) by man_ls (guest, #15091) [Link]
What about remote support for Chrome OS? Manual intervention does not look like a Google thing, but for a price -- who knows.
Why will you need manual intervention?
Posted Jul 25, 2009 9:00 UTC (Sat) by khim (subscriber, #9252) [Link]
"""
Manual intervention does not look like a Google thing, but for a price -- who knows.
"""
Hmm... Why do you need NX for that? You can throw away the whole thing and your data will still be available in the cloud. All data on the netbook itself is just a cache for the data in the cloud. Sure, if your last two weeks spent in the wilderness are extremely valuable, you'll be able to find someone who'll repair the Chrome OS for a price, but for most users... just click the "reinstall" button - local data will be wiped and a clean, updated version of the OS will be available in a few minutes...
And if your OS is broken beyond the ability to synchronize... the chances are high it's broken beyond the ability to work with NX too...
What's the point?
Posted Jul 26, 2009 14:57 UTC (Sun) by dkite (guest, #4577) [Link]
someone could even make it work at all, but running into problems that make it impractical to use.
I've used the spreadsheet a bit. I have a couple that are very simple and work fine, and one that is multipage and a bit complex. It doesn't.
They were trying to use the existing installed base of browsers to expand their application suite. The server-based idea works well for some things, but the browser is a poor UI for complicated apps.
What if you opened Google Chrome and got a remote X spreadsheet running somewhere?
Derek
What's the point?
Posted Jul 27, 2009 18:42 UTC (Mon) by jzbiciak (guest, #5246) [Link]
I suspect offering GNOME or KDE isn't the target. Offering something built around some of the same core infrastructure (X, Linux) that GNOME and KDE are built on seems entirely reasonable and likely.
Bad and complex architecture
Posted Jul 25, 2009 8:37 UTC (Sat) by astrand (guest, #4908) [Link]
VNC, on the other hand, is a much cleaner solution, where you have only one X server (Xvnc), running on the server side. The protocol is truly thin, and performance is not dependent on well-behaved applications. Since the protocol is so minimal, implementing clients is easy. And with the recent TurboVNC/TigerVNC developments, you can achieve amazing performance, which allows usage of 3D-heavy or video applications.
Bad and complex architecture
Posted Jul 25, 2009 12:21 UTC (Sat) by sbergman27 (guest, #10767) [Link]
"""
And with the recent TurboVNC/TigerVNC developments, you can achieve amazing performance
"""
Better than it used to be, maybe. But not "amazing" enough. Our desktops at branch offices are delivered by FreeNX. It performs quite well. I just did a mini-evaluation of TigerVNC based on your recommendation, and there is no way my users would find the performance acceptable for their daily use. Even the best VNC clients/servers today are too laggy for that.
NX *is* more complicated than I would like, which is why I jumped to try Tiger after reading your post. But Tiger is obviously not the solution.
Bad and complex architecture
Posted Jul 25, 2009 12:55 UTC (Sat) by nix (subscriber, #2304) [Link]
Applications that repeatedly repaint hunks of the screen with the same GUI elements for no good reason are very slow/bandwidth-hungry with VNC, because it's too simpleminded to be able to tell that this is a no-op. (Some Java apps seem to do this quite a lot, e.g. the abomination which is Oracle Forms.)
Bad and complex architecture
Posted Jul 25, 2009 16:52 UTC (Sat) by lab (guest, #51153) [Link]
How does technology like pure X or NX fare with these types of apps?
Bad and complex architecture
Posted Jul 25, 2009 17:12 UTC (Sat) by astrand (guest, #4908) [Link]
From a technical viewpoint, Xvnc does (by default) pixel-by-pixel comparisons on the framebuffer. So unless you have explicitly disabled this (-CompareFB=0), or have some strange client, I really don't understand why you are experiencing this behaviour.
It's better to have half a cake than no cake at all
Posted Jul 27, 2009 6:02 UTC (Mon) by khim (subscriber, #9252) [Link]
"""
Every time I've tested NX, I tried a Java based application and confirmed the bad performance.
"""
Don't use such applications then. Sure, badly-written applications are unusable with NX - and most Java-based applications are pigs in regard to any and all resources. But with VNC all applications are equally unusable!
Bad and complex architecture
Posted Jul 25, 2009 17:20 UTC (Sat) by astrand (guest, #4908) [Link]
For many (or even "most", in my context) customers/cases, the VNC bandwidth requirement is no problem. I can say this after delivering VNC-based solutions to customers for the last seven years.
What kind of bandwidth do you have?
Bad and complex architecture
Posted Jul 25, 2009 18:23 UTC (Sat) by sbergman27 (guest, #10767) [Link]
"""
There are certainly cases where NX works better than VNC: Say, with well-behaved applications on very slow connections.
"""
Oh, come on. The only instance in which VNC *might* outperform NX is on a LAN. And my bet would still be on NX there. We used RealVNC and then TightVNC over T1 connections for some months and we found them to be quite unacceptable for performance, and even worse (far worse) for display quality. This is for general business desktop use. Web, email, OO.o, lots of PDF viewing, often parts manuals with lots of scanned images. Some IE under Wine (unfortunately). We throw a lot at our remote displays and NX shines where VNC chokes. The mini-eval I just ran on TigerVNC reported 2047kb. It was better than when we used VNC several years ago. The video quality was good, as it immediately selected a sufficient color depth. But the performance was markedly inferior to a side by side NX session.
Regarding well vs poorly behaved apps... if there is an app that chokes NX but not VNC, I have yet to see it. Tiger/Turbo may, indeed, do well on 3D over a LAN. But my business desktop clients have generally opted against providing Doom 3 to the thin clients. Out of curiosity, what customers do you have who are interested in FPS, and what are they doing?
Bad and complex architecture
Posted Jul 26, 2009 18:01 UTC (Sun) by astrand (guest, #4908) [Link]
When you say "2047kb", are you referring to the "kbit/s" measurement available in the client? This is not a good measurement of how much bandwidth VNC "require".
""
Out of curiosity, what customers do you have who are interested in FPS, and what are they doing?
""
Video playback (Youtube, mplayer), custom OpenGL applications for CAD/CAM/visualisation, Catia etc.
Bad and complex architecture
Posted Jul 27, 2009 14:22 UTC (Mon) by sbergman27 (guest, #10767) [Link]
"""
When you say "2047kb", are you referring to the "kbit/s" measurement available in the client? This is not a good measurement of how much bandwidth VNC "require".
"""
I'm saying that 2047kb was the bandwidth that TigerVNC reported that it had to work with over that connection, which should presumably be the same that NX had to work with over that same connection. And for business desktop use, VNC was pretty clunky and unresponsive compared to NX. As we have no legitimate business need for video on our business desktops, we don't allow it. So I don't have a base of experience as to how well or poorly NX performs relative to *vnc for that use.
BTW, I tried TigerVNC rather than TurboVNC specifically to avoid the excuse that VNC server/client Q still uses the old 3.X protocols. Tiger uses V4.
Bad and complex architecture
Posted Jul 25, 2009 19:29 UTC (Sat) by nix (subscriber, #2304) [Link]
Even ADSL has high enough latency that raw X or VNC are seriously painful. Modem links are utterly intolerable.
(Mind you, I've never tried NX on either of these.)
Bad and complex architecture
Posted Jul 25, 2009 19:54 UTC (Sat) by sbergman27 (guest, #10767) [Link]
"""
Even ADSL has high enough latency that raw X or VNC are seriously painful. Modem links are utterly intolerable.
"""
Raw X actually works remarkably well over a direct modem-to-modem PPP connection. There the hardware compression really helps the bandwidth, and latency is nearly instantaneous. (Ping time may be 100ms, but that's just the time it takes to actually transmit 64 bytes back and forth. The real latency is almost nonexistent.) Over a modem connection through an ISP and over the internet, both VNC and X are intolerable, with raw X being worse than VNC. X's main problem is, of course, latency and not bandwidth.
I have used NX over 56k modem connections (typically 45kbps) for full-desktop, fullscreen sessions, and performance is remarkably good. I sure wouldn't want to use it all day, but it's pretty serviceable. Framebuffer strategies simply cannot beat the combination of aggressive, context-aware compression and aggressive caching. To get good performance, you have to be operating at the X protocol level.
NX memory usage is also significantly better. This makes a difference if you run a lot of desktops. My largest server (server in this context meaning the machine running the actual apps) runs about 65 simultaneous Gnome sessions.
Bad and complex architecture
Posted Jul 25, 2009 22:42 UTC (Sat) by nix (subscriber, #2304) [Link]
probably much better.
Bad and complex architecture
Posted Jul 26, 2009 18:11 UTC (Sun) by astrand (guest, #4908) [Link]
""
Framebuffer strategies simply cannot beat the combination of aggressive, context-aware compression and aggressive caching. To get good performance, you have to be operating at the X protocol level.
""
Xvnc can basically operate on the X level: The Xvnc server is aware of which drawing operation is performed. For example, it can instruct the client to move an existing rectangle, rather than repaint/resend it.
Still, there are many more techniques (including caching) that could be added to VNC to enhance performance even further. Prototypes are already available.
""
NX memory usage is also significantly better. This makes a difference if you run a lot of desktops. My largest server (server in this context meaning the machine running the actual apps) runs about 65 simultaneous Gnome sessions.
""
Compared to what? As far as I know, GNOME uses the same amount of memory regardless of whether the X server is Xvnc or the NX one. It is really GNOME and its applications that consume the memory, not the X server. Not with Xvnc, at least.
Bad and complex architecture
Posted Jul 27, 2009 17:57 UTC (Mon) by sbergman27 (guest, #10767) [Link]
"""
Compared to what? As far as I know, GNOME uses the same amount of memory regardless of if the Xserver is Xvnc or the NX one. It is really GNOME and its applications that consumes the memory; not the Xserver. Not with Xvnc, at least.
"""
Compared to RealVNC, which is what I have actual experience with in an environment in which they could be directly compared. And I think you might be surprised, in an environment with many desktop sessions running, just how low the requirements of the actual desktop are when shared-memory savings are figured in. At the time I ran the comparison, I believe we were running about 74MB per desktop user. (4096MB of RAM for 55 users. And yes, significant swap was used, although the actual swapping overhead was relatively low.)
I don't remember the actual numbers, but the nxagent and Xvnc processes tended to be at the top of the 'top' output when sorted by memory use. And the (res - shared) values were consistently better for nxagent than for Xvnc. NX is doing something more efficiently. I'm not sure what.
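The "(res - shared)" comparison described here is simple to express. A sketch with made-up numbers (these are not the actual nxagent/Xvnc figures, just an illustration of the metric):

```python
# The "(res - shared)" metric from the comment above: resident memory a
# process does NOT share with other processes. All numbers are invented
# for illustration; they are not real nxagent/Xvnc measurements.

processes = {
    "nxagent": {"res_mb": 60, "shared_mb": 45},
    "Xvnc":    {"res_mb": 62, "shared_mb": 30},
}

unique = {}
for name, mem in processes.items():
    # Unique (unshared) resident memory, which multiplies per user.
    unique[name] = mem["res_mb"] - mem["shared_mb"]
    print(f"{name}: {unique[name]} MB unique")
```

The point of subtracting shared pages is that only the unique portion multiplies with the number of logged-in desktops; a server process with a large but mostly shared resident set is cheaper than raw RES suggests.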
Bad and complex architecture
Posted Jul 27, 2009 19:37 UTC (Mon) by astrand (guest, #4908) [Link]
""
I don't remember the actual numbers, but the nxagent and Xvnc processes tended to be at the top of the 'top' output when sorted by memory use. And the (res - shared) values were consistently better for nxagent than for Xvnc. NX is doing something more efficiently. I'm not sure what.
""
Perhaps this is due to the fact that NX has multiple processes (NX agent and NX proxy)? You'll need to count both for a valid comparison.
In any case, the Xvnc memory usage is no big deal. On my Fedora 11 system, the local Xorg now consumes 63 MiB (resident), while Xvnc only consumes 34 MiB.
Bad and complex architecture
Posted Jul 30, 2009 4:34 UTC (Thu) by sbergman27 (guest, #10767) [Link]
For X, most of that "resident" memory is resident *video RAM*, often mapped multiple times, and not system RAM. X servers always look like they are using a lot more system memory than they really are.
At any rate, when we switched from VNC to NX, I recall that the reduction in memory use was clearly discernable in the systat reports. Note that the 34MB for the Xvnc process represents 3.4GB with a hundred desktops running. When people tell me that program Q "only" consumes 34MB, I automatically multiply by 70 to see what the *real* consumption would be if my users were running it on my most heavily loaded server. It makes a difference.
Bad and complex architecture
Posted Jul 30, 2009 7:35 UTC (Thu) by astrand (guest, #4908) [Link]
""
No nxproxy. Just nxagent and nxnode, with nxnode using less than a meg of res - shared.
""
As I understand it, nxnode is a shell script, so it's no surprise that it consumes less memory than a full Xserver...
Why are you not running a nxproxy?
The nxagent should be the X server, so that is what should be compared to Xvnc. One theory why it consumes less memory than Xvnc is that it (AFAIK) is based on a very old X implementation, X.Org 6.9 or something like that. Back when we delivered Xvnc based on that old implementation it was also more lightweight, consuming less than 10 MiB or so. But of course, it also lacked many modern X extensions.
It's not the VNC part of Xvnc that consumes the RAM; it's the X server.
I agree with you that small figures can turn into large numbers when you multiply by the number of users. But I still claim that even 28 MiB (RES-SHR, right now) is not a big deal if you are running a full, modern desktop environment and mainstream applications. These typically consume much, much more.
Bad and complex architecture
Posted Jul 30, 2009 15:11 UTC (Thu) by sbergman27 (guest, #10767) [Link]
Apparently the functions of nxproxy have been subsumed by nxagent. We're running freenx 0.7.3 with nxlibs 3.3. NX has become somewhat simpler.
Although we have no need for multimedia, and in fact actively discourage it, I did run a side by side comparison of NX and Tiger last night using this amusing and endlessly fascinating video:
http://www.elphel.com/3fhlo/samples/333_samples/m021_300_...
This is on a 1680x1050 screen. It comes up at 320x240 or so. And Tiger reports a 2177kbit connection speed. At that size, both NX and Tiger are jerky. I clearly see each frame update. However, the Tiger instance seems to update at a more even rate. The NX framerate jumps around a lot, which is annoying. I'd give VNC the edge, there. I would say that Tiger is barely usable at that size. If I maximize the totem window, or go full screen, both Tiger and NX go down the toilet. Maybe 2 frames per second. In fact, neither Tiger nor NX are usable if I increase much at all over 320x240.
Interestingly, if I let NX run it through several times, it eventually will play at any size, even full screen, with silky smoothness. Tiger can't begin to match NX's client-side caching. But of course, for this use, that's cheating.
Running this test, I spent a good bit of time on both types of remote desktop. And did some more testing of things like browsing the web and scrolling through PDFs. With NX, it's so easy to forget I'm not on the local machine, that I *have* to make sure to use a different wallpaper remotely and locally. And even then I have to stop and remind myself whether I'm remote or local. With Tiger, I can *never forget* that I'm on a laggy remote connection. My users would storm in and lynch me if I tried to saddle them with it.
What I get out of all of this is that on a 2 mbit connection with a 75ms ping time:
- NX is *far* superior for normal business desktop work. (Like night and day. There's no comparison.)
- For videos, Tiger is about the same speed (or perhaps slightly faster) and notably smoother for very small videos.
- Neither NX nor Tiger are usable at all for videos much beyond 320x240.
- FreeNX is getting simpler. And Neatx is about to take that simplification to the next level.
FWIW, I do use VNC to remotely administer the legacy windows clients.
Bad and complex architecture
Posted Jul 30, 2009 18:00 UTC (Thu) by astrand (guest, #4908) [Link]
""
...when we made the VNC->NX switch, we were 32 bit and running about 50 desktops on 4GB memory. So aout 82MB per user. (Surprised?)
""
Not really. There are many different use cases, and this memory usage is actually what we recommended a few years ago (about 50 MiB per user). But since then, the desktop and applications have grown. This is why we are now recommending 100-150 MiB instead. That is enough to cover a typical rich desktop with heavy applications such as OpenOffice, Firefox, Google Earth, etc. But of course, for other users less than that may still be perfectly fine.
Regarding video: Does totem change the resolution to 320x240 before playing the video? If not, 1680x1050 is what's going to be used. I understand if this gets jerky on 2 Mbit. In general, video playback only works well on LANs.
We regularly test video with sound at 1024x768, and that works great, but only on a LAN.
Bad and complex architecture
Posted Sep 14, 2009 12:21 UTC (Mon) by sushisan (guest, #60822) [Link]
We have a client with only 8 terminals. For the migration we have a server with a Phenom X4 and 8GB of RAM.
We use FreeNX on the server (F11) and, at first, the NX client.
The bandwidth usage is very good, even with a SLOOOOOOOOW internet connection.
But the memory usage is a disaster!!! nxagent's memory usage keeps growing until the system starts to swap, whether the session is being used or not (just with a connection active)!!!
I couldn't find any solution to this, even with the cache switched off.
Now we've migrated to Xvnc and it works pretty well.
Bad and complex architecture
Posted Jul 30, 2009 15:32 UTC (Thu) by sbergman27 (guest, #10767) [Link]
Bad and complex architecture
Posted Jul 30, 2009 18:14 UTC (Thu) by astrand (guest, #4908) [Link]
I'm trying not to "sell" ThinLinc too much in this forum, but since you brought this up: In ThinLinc, we are implementing access to local devices using completely separate protocols, all running on top of SSH. This allows us (just like NX) to use existing and open protocols. But unlike NX, we use the real upstream versions, instead of forks. This allows us to work closely with the community.
I haven't checked what NX provides nowadays, but I'm quite sure that we actually have better support for local devices. For example, we are supporting:
* Sound (using PulseAudio, superior to ESD).
* Serial port redirection
* File access (using NFSv3/unfs3, much more lightweight than Samba)
* Local printing (no local configuration necessary)
* Smart card redirection and authentication
All of the above are supported on both Windows and Linux. We have clients for other platforms such as Solaris and Mac OS X (though not yet with full support for all types of local devices).
Bad and complex architecture
Posted Jul 30, 2009 19:36 UTC (Thu) by sbergman27 (guest, #10767) [Link]
Bad and complex architecture
Posted Jul 30, 2009 20:06 UTC (Thu) by astrand (guest, #4908) [Link]
Bad and complex architecture
Posted Jul 30, 2009 20:54 UTC (Thu) by sbergman27 (guest, #10767) [Link]
I'm trying not to "sell" ThinLinc too much in this forum, but since you brought this up:
"""
Well, it was probably inevitable from the time that you, Peter Åstrand, "Chief Developer" for Cendio's ThinLinc product, a proprietary competitor to NX, started a thread on LWN.net entitled "Bad and complex architecture", trashing NX, and talking up TigerVNC more than its actual performance justifies, without really disclosing who you were until finally dropping a hint long after everyone else had moved on from this thread.
Shame on you. When we think Microsoft is doing that, we call it "astroturfing".
I have found our conversation enjoyable and enlightening, and my resulting research beneficial and educational. But I can't help feeling annoyed.
Bad and complex architecture
Posted Jul 30, 2009 21:54 UTC (Thu) by astrand (guest, #4908) [Link]
Bad and complex architecture
Posted Jul 30, 2009 22:37 UTC (Thu) by sbergman27 (guest, #10767) [Link]
I'm writing this in my spare time, as a private individual...
"""
... with a direct and substantial financial interest in discrediting the competition. And if you were to let people know about your direct and substantial financial interest up front you would have less credibility when doing it.
Who pays for your LWN subscription, and whether you post during business hours or not, is completely beside the point.
And when such clear conflict of interest exists, it *is* customary to declare such.
disclaimers
Posted Jul 31, 2009 0:47 UTC (Fri) by xoddam (subscriber, #2322) [Link]
Bad and complex architecture
Posted Jul 31, 2009 13:29 UTC (Fri) by sbergman27 (guest, #10767) [Link]
Bad and complex architecture
Posted Jul 31, 2009 16:26 UTC (Fri) by sbergman27 (guest, #10767) [Link]
Bad and complex architecture
Posted Aug 7, 2009 13:31 UTC (Fri) by deleteme (guest, #49633) [Link]
Bad and complex architecture
Posted Jul 26, 2009 8:30 UTC (Sun) by tialaramex (subscriber, #21167) [Link]
Years ago I watched someone play my copy of GLQuake from their machine over GLX quite comfortably. It wasn't quite as fast as a local copy of the game, but it was very playable. The VNC solution, in addition to chewing all the CPU on both machines and _still_ requiring a fast OpenGL implementation on the machine running the game, would have also chewed up all the network bandwidth, and I'm not convinced it would even have been playable.
My own OpenGL software is much less pretty than Quake, but it has practical reasons to be used over a network. It runs fine, but under VNC it would incur more-or-less full-screen refreshes at 60Hz. That's a LOT more bandwidth and CPU than a few thousand vertices every so often and a set of transform matrices per frame.
Maybe you have a special definition of "outperform" but for me if you needed many times more resources to get the same result, that is less rather than more performance.
Bad and complex architecture
Posted Jul 26, 2009 14:06 UTC (Sun) by mjg59 (subscriber, #23239) [Link]
Bad and complex architecture
Posted Jul 26, 2009 18:18 UTC (Sun) by astrand (guest, #4908) [Link]
If you want to use VirtualGL with VNC, however, you'll need an accelerated implementation (TurboVNC/TigerVNC/Sun Shared Visualization/ThinLinc) to get good performance.
Google releases Neatx NX server
Posted Jul 25, 2009 8:51 UTC (Sat) by njs (guest, #40338) [Link]
For those unfamiliar with NX's innards, 'nxagent' is the somewhat opaque name that NoMachine gave to the actual X server that's been hacked to run headless, display on other X servers via the NX protocol, and reconnect to a different remote X server on request -- i.e., the part that provides the core functionality. What Google has released is a rewrite of the management scripts that surround this core -- the way you connect to an NX session is that you ssh in as the magic user 'nx', then speak a special protocol to log in again as your actual user, then speak another special protocol to manage sessions, and then get a proxy to the actual underlying nxagent. Various programs are needed to implement these protocols, spawn and manage the actual nxagents, proxy to them, etc. The freenx implementations are rather grotty, as mentioned, and Google's look much cleaner. Really, though, what *I* want is a way to skip them altogether, on both aesthetic and practical grounds.
Unfortunately, nxagent itself is (last I checked) a forked version of the old XF86 monolithic tree, developed by an "occasionally throw a big pile of undocumented tarballs over the wall" method, and while it *should* make it easy to get a simple persistent X session running under a single user account, there are some restrictions that make it hard. I don't remember exactly what problems I ran into anymore -- hopefully someone will correct me if I'm wrong -- but IIRC you cannot *start* a session in headless mode, only start it in attached mode and then disconnect, and if you tell it that you want to reconnect but fail to do so within 30 seconds, it will just quit and take down all your running programs. If your wireless drops out at the wrong time, then sucks to be you.
(Standard disclosure/shameless plug: I was so frustrated that I wrote a competitor, 'xpra'.)
Google releases Neatx NX server
Posted Jul 25, 2009 13:24 UTC (Sat) by rvfh (guest, #31018) [Link]
Ideally you'd want to initiate the server from a client, and apps from there too.
Anyway, I'll see how fast it is from work, and if it is, then xpra will become part of my list of must-install packages!
Google releases Neatx NX server
Posted Jul 26, 2009 3:26 UTC (Sun) by njs (guest, #40338) [Link]
There isn't any whizzy GUI front-end right now, no -- you just say 'xpra attach ssh:whereever' (or click on a shortcut to that), and Control-C to kill it. But the client is <500 lines of Python, so it wouldn't be hard to add more bells and whistles if someone cares to do so.
> Ideally you'd want to initiate the server from a client
Good point! I wrote the patch for this... but then realized that there's a problem: the client doesn't necessarily know where the xpra executable is installed on the server, so it can't necessarily find it to start it. You can work around this by requiring that it always be installed in a well known location, but then we're back to the heavy-weight NX-style setup (I don't have root on the server where I use xpra!). And typing 'ssh whereever xpra start' isn't really harder than typing 'xpra start ssh:whereever' anyway.
(This problem doesn't arise when connecting to an already running server, because the server does some magic at start-up time so the client can always find it.)
> and apps from there too.
This is easy, but would need that fancier client GUI to be useful. (It'd just be equivalent to 'ssh <host> env DISPLAY=<...> nohup <command>', anyway, except that if we built it in, then a client could make that request over its existing ssh connection without spawning a new one. Of course, you could just run a terminal under xpra too.)
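Putting the pieces above together, the manual two-step workflow can be sketched as a short shell transcript. This is an illustrative sketch, not an official recipe: the hostname keeps the spelling used above, the display number :100 is an arbitrary free display, and it assumes xpra is on the default PATH of the login shell on the server.

```shell
# Start a persistent xpra server on the remote host, under your own
# account (no root and no magic 'nx' user required, unlike NX).
ssh whereever xpra start :100

# Launch an application inside that session over a plain ssh hop;
# nohup keeps it alive after the ssh connection closes.
ssh whereever "DISPLAY=:100 nohup xterm >/dev/null 2>&1 &"

# Attach to the session from this client; Ctrl-C detaches, the
# programs keep running, and you can re-attach later from anywhere.
xpra attach ssh:whereever:100
```

The point of the workaround is that only the `xpra start` step needs to know where the xpra executable lives on the server; the later `xpra attach` finds the running server through the start-up magic mentioned above.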
Google releases Neatx NX server
Posted Jul 28, 2009 13:34 UTC (Tue) by kh (subscriber, #19413) [Link]
Adding some SPICE to this conversation?
Posted Jul 25, 2009 14:41 UTC (Sat) by dowdle (subscriber, #659) [Link]
According to a SolidICE demonstration video from some time ago on the Qumranet website (http://us.qumranet.com/videos/Qumranet.wmv), they claim the SPICE protocol can do a four head display, support bi-directional audio, and full screen HD video on one of the displays... at local computer speeds. Of course that is over a LAN but one would hope that it work well over WAN sans multimedia. The SPICE protocol is patented but hopefully given Red Hat's track record they'll open up the protocol with a reasonable license? SPICE is being used as a delivery method for KVM-based virtual desktops but surely it could be decoupled and made into a general purpose remote desktop type client/server scenario.