
Scenes from the Real Time Linux Workshop


By Jonathan Corbet
October 5, 2009
The 11th Real Time Linux Workshop was held in Dresden, Germany, at the end of September; it was attended by some 200 researchers and developers working in that area. RTLWS was a well-organized event, with engaged participants, interesting topics, and more than adequate amounts of German beer. This article will be concerned with three sessions from that event; other topics (deadline schedulers in particular) will be looked at separately.

[Photo: conference speakers]

Real time or real fast?

There is a certain amount of confusion surrounding realtime systems; most commonly, people think that realtime is concerned with speed. The real focus of realtime computing, though, is determinism: the fastest possible response is far less important than knowing that the system will respond within a bounded time period. In fact, realtime is often at odds with speed, especially if speed is measured in system throughput; this conflict was driven home by Paul McKenney's talk, titled "Real fast or real time: how to choose." Paul concluded that one should choose the "real fast" option in a number of situations, including those where throughput is the primary consideration, virtualization is in use, or hard deadlines are not present. In other words, if realtime response is not needed, a realtime kernel should not be used - not a particularly surprising conclusion.

Interestingly, though, the "real fast" option may sometimes be best in hard-deadline situations as well. In particular, if the amount of processing which must be done within the deadline is large enough, the performance costs associated with hard realtime systems may become more of an impediment to getting the work done in time than the non-deterministic nature of general-purpose systems. The number Paul put out was 20ms; if the system must do more computing than that within each deadline cycle, it is likely to perform better on "real fast" machines. In other words, after 20ms of computation, a throughput-optimized system will have caught up enough to make up for any extra latency which might delay the start of that computation.
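
That break-even reasoning can be made concrete with a little algebra; the symbols and numbers below are illustrative, not taken from Paul's paper. Let s > 1 be the throughput advantage of the "real fast" system, and L the extra latency it may incur before the computation starts. If the work itself takes time W on the realtime system, then

    T_{rt} = W, \qquad T_{fast} = L + \frac{W}{s}

so the throughput-optimized system finishes first whenever

    L + \frac{W}{s} < W \quad\Longleftrightarrow\quad W > \frac{sL}{s - 1}

With, say, a 10% throughput advantage (s = 1.1) and L = 2ms of extra startup latency, the break-even point works out to W = 22ms - in the neighborhood of the 20ms figure above.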

See Paul's paper [PDF] for more details.

Non-deterministic hardware

Determinism is generally seen as a software issue; it is expected that hardware always behaves in a consistent way. Some research [PDF] presented by Peter Okech, though, makes it clear that contemporary hardware is not as deterministic as one might think. Today's computers incorporate a great deal of complexity from many sources: multiple processors, multiple levels of caching, long instruction-processing pipelines, instruction reordering, branch prediction, system management interrupts, etc. From complexity, says Peter, comes randomness. As a demonstration of this fact, his group did extensive timings of simple instruction sequences; even after long "warmup" cycles and with interrupts disabled, these sequences never did reach a point where they would execute in a constant or predictable time.
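
The flavor of that measurement is easy to reproduce, if only approximately, in user space; Peter's group ran with interrupts disabled, which the rough sketch below cannot do, so the spread it shows is an upper bound. It times a fixed instruction sequence with the x86 timestamp counter:

    /* Time the same fixed instruction sequence many times and report
     * the spread.  On fully deterministic hardware, min and max would
     * converge; on contemporary x86 machines, they never do. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>          /* __rdtsc() */

    int main(void)
    {
        uint64_t min = UINT64_MAX, max = 0;
        volatile int sink = 0;

        for (int run = 0; run < 100000; run++) {
            uint64_t t0 = __rdtsc();
            for (int i = 0; i < 100; i++)   /* the fixed "work" */
                sink += i;
            uint64_t t1 = __rdtsc();
            uint64_t d = t1 - t0;
            if (d < min) min = d;
            if (d > max) max = d;
        }
        printf("cycles: min=%llu max=%llu\n",
               (unsigned long long)min, (unsigned long long)max);
        return 0;
    }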

For added fun, Peter's group coded a random number generator based on hardware non-determinism. The resulting random number sequences were then subjected to all of the tests they could come up with, from basic mean-calculation and compression tests through to full entropy computation. The results came out the same each time: instruction timings on contemporary systems are truly random. There is no real need to buy special-purpose hardware for random number generation; we are already running on such hardware. Needless to say, there are implications for anybody looking for strict determinism from their systems, especially on very small time scales.

Developers and academics

The closing event of the conference was a panel session on the disconnect between academia and the development community; the panelists were James H. Anderson, Thomas Gleixner, Hermann Härtig, Jan Kiszka, Doug Niehaus, Ismael Ripoll, and Peter Zijlstra. The problem statement asked: why are there dozens of papers on deadline schedulers, but no implementation in Linux? How can somebody get a computer science degree without learning about the problems posed by multicore processors? The actual discussion was relatively unstructured, involving numerous members of the audience, and it did not answer those specific questions. But it was interesting nonetheless.

The session opened with an invitation to the panelists to make wishes, with no real concern for practicality. Developers and academics both wished that professors could receive recognition and credit for patches which get merged into an upstream project. The current system rewards the publication of papers while ignoring practical contributions (including little details like teaching) altogether. Without an incentive to get their work upstream, researchers tend to stop working once their research reaches a publishable state.

It was noted that in some companies (Siemens was cited), employees get credit for accepted patches in much the same way they get credit for more traditional publications.

Another wish which was well received on both sides was the idea that developers and researchers should attend each other's conferences. The two groups tend to speak very different languages; for example, academics talk about "deadlines" (a set time by which the work must be done) while developers worry about "latency" (how long it takes the system to respond to an event). Given fundamental concepts that differ in this way, it is not entirely surprising that the two groups do not always communicate well. Going over to the other side and being immersed in the concerns and language found there would be helpful for everybody working in this field.

Developers asked for the publication of papers which are more easily read on their own. It is hard for busy developers to make time to read academic papers; if they have to go look up a dozen other papers to make sense of one, they are likely to just give up. The publication of more survey papers was suggested as one way to help in this area. Another was to read recent dissertations, which tend to start with relatively complete summaries of the current state of academic understanding. The hosting of summary tutorials at conferences was also suggested.

There was a request from academia for more example problems and tasks that students could take on. Also requested was an easier way to hook research code into the kernel and play with it. That might make it easier for academics to push code upstream, but not all developers are convinced that's a good idea. Instead, they say, it may be better if academics remain focused on long-term problems, with the development community adapting the best ideas for implementation and upstream merging.

If one gives academics the green light to be impractical, they will rarely miss the opportunity. So, it was suggested, the best thing that could happen would be for Linus Torvalds to suddenly fall in love with microkernels; Thomas Gleixner could then become the maintainer of the L4 microkernel system. The underlying motivation here was not just that academics still think microkernels are better (many certainly do); it's also the simple fact that the Linux kernel has become so complex that it's getting hard for researchers to play with.

There was some lamentation that the academic community is not really producing students who are able to work with the development community. They don't know how to get code upstream. Increasingly, it seems, they don't really even know how to program - especially at the operating systems level. The academic system was charged with churning out armies of Java programmers who have little understanding of how computers actually work and have no clue of the costs of things. The result is that they go forth and create no end of highly bloated systems. The really good developers, it was claimed, tend to come from an electrical engineering background - though the prevalence of hardware engineers who churn out bad code was also noted.

Some universities have experimented with "real-world programming" courses. One of the things they have found is that registrations tend to be low - there is not a great deal of interest in taking that kind of class. There was also some special criticism directed toward the "Bologna process," which is trying to harmonize educational offerings across Europe. That process calls for reducing the standard undergraduate program to three years, which is not at all sufficient to teach people what they really need to know.

A suggestion for students who are interested in learning community development was to simply start with mailing list archives and spend some time watching how things are done. Then dive in. The community is making a real effort to avoid flaming people to a crisp these days, so jumping in is safer than it once was. But, in the end, people join the development community because they are interested in doing so; offering netiquette lessons is unlikely to inspire more of them. There are very few students who have the interest and the ability to become competent system-level programmers. It has always been that way; things have not really changed in that regard.

Internships at open source companies were suggested as a way to build both interest and experience. Such internships exist at a number of companies, though they tend to be fairly severely limited in number. What does exist, though, is the Google Summer of Code program, which is, for all practical purposes, an internship program on a massive scale. The problem here is that the kernel and realtime communities are not really organized in a way that lets them sign up to mentor Summer of Code students - this problem should certainly be solvable.

But none of that will help if students do not want to learn to do real development in the community. As strange as it seems, it appears to not be an entirely attractive profession. It takes years of work to become a competent engineer; many are simply unwilling to put in that time. Whether things have gotten worse because people expect instant gratification now, or whether it has always been this way was a matter of debate. One panelist suggested that things will only get better when good engineers make more money than good lawyers.

Another complaint was that universities have a certain tendency to actively block free software users. Some use proprietary virtual private network technology which is not available to Linux users. Homework submission sites which only work with Internet Explorer were also mentioned.

The session ended with little in the way of specific action items, but there was one: researchers requested a means by which they could easily experiment with new scheduling algorithms in the kernel. It was agreed that some sort of pluggable scheduler technology would be added to the realtime tree, which has long served as a sort of playground for interesting new approaches. A pluggable scheduler seems unlikely to make it upstream, but presence in the realtime tree should make it sufficiently available for researchers to make use of.
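
To give a sense of what such a hook involves: the scheduler core already dispatches its decisions through an internal operations structure, so a research plug-in would presumably register a similar set of callbacks. The sketch below is purely illustrative - these names are invented, and no such interface exists in any tree:

    /* Hypothetical plug-in scheduler interface; all names invented
     * for illustration.  struct rq and struct task_struct are the
     * kernel's runqueue and process descriptor types. */
    struct rq;
    struct task_struct;

    struct research_sched_ops {
        const char *name;
        void (*enqueue_task)(struct rq *rq, struct task_struct *p);
        void (*dequeue_task)(struct rq *rq, struct task_struct *p);
        struct task_struct *(*pick_next_task)(struct rq *rq);
        void (*task_tick)(struct rq *rq, struct task_struct *p);
    };

    /* A research module would register its callbacks at load time
     * and unregister them on removal. */
    int research_sched_register(struct research_sched_ops *ops);
    void research_sched_unregister(struct research_sched_ops *ops);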

The conference adjourned with the announcement of the venue for next year's event. The Real Time Linux Workshop has tended to move around more than most conferences; past events have been held all over Europe as well as China, Mexico, and the US. The 2010 Workshop will continue that practice by moving to Nairobi, Kenya, in the latter part of October. That should be an interesting place to discuss what's happening in the rapidly developing realtime Linux area.


Scenes from the Real Time Linux Workshop

Posted Oct 5, 2009 18:48 UTC (Mon) by tnoo (subscriber, #20427)

> The 11th Real Time Linux Workshop was held in Dresden, Germany, at the
> end of December

Had to read this twice until I figured out how much ahead of time LWN
really is.

best, tnoo

Scenes from the Real Time Linux Workshop

Posted Oct 5, 2009 18:51 UTC (Mon) by zeekec (subscriber, #2414)

I was going to ask for a ride on his time machine.

Scenes from the Real Time Linux Workshop

Posted Oct 5, 2009 20:28 UTC (Mon) by ballombe (subscriber, #9523)

If your laptop runs a realtime kernel, does that make it a real time machine?

Ahead of the game

Posted Oct 5, 2009 18:55 UTC (Mon) by corbet (editor, #1)

Sheesh...we went over this article a couple of times, and nobody caught that. I blame jet lag. Fixed.

Scenes from the Real Time Linux Workshop

Posted Oct 6, 2009 3:19 UTC (Tue) by mmcgrath (guest, #44906)

Maybe he thought it was a preemptive workshop?

Scenes from the Real Time Linux Workshop

Posted Oct 6, 2009 2:46 UTC (Tue) by Viddy (guest, #33288)

I'm doing an Operating Systems postgraduate course this semester at a NZ university. At the beginning of the semester, the class had three choices for what to study: lectures/assignments/exam, lectures/big assignments/no exam using Linux, or a lectures/big assignments option in Minix. The class overwhelmingly chose to go down the Minix path, ostensibly because it would be simpler and cover more topics, rather than the few specific areas of the Linux kernel, which some of the class described as "complicated". The flip side of this is that the lecturer has more than made up for the "uncomplicated" nature of Minix with large volumes of readings and difficult assignments.

Universities differ (in my opinion) from prescriptive technical institutions by teaching their students a framework for how a system works, rather than how a specific instance of a system works. In the case of the paper I did, has this translated into a framework for understanding how other kernels work? I think it may have - things like syscalls and process scheduling now make a lot more sense than they used to.

On the quote about microkernels: I had to revise some assumptions I had about the message passing methodology being a hideous waste of resources after reading the Barrelfish paper at http://barrelfish.org/barrelfish_sosp09.pdf and seeing the benchmarks at the end. The short version seems to be that, on large multicore systems, ccNUMA doesn't scale, and specific message passing between cores and sockets seems to be a good idea, though I'm not quite clear on exactly how the kernel running on each core passes messages around inside the core.

My observations on the university classes I've attended are that there are some very good students, who at postgrad level are finally being stretched and will end up doing great things either in the kernel or elsewhere; there are a few good students who will also do good work; and the rest will probably go on to work with the languages they've been using at uni. I'm not terribly sure that this has changed much from when I finished my undergraduate degree in Chemistry six or so years ago (I should know, I was one of the crap ones back then), and I wonder if comments to the contrary from the audience are another form of "back in my day"?

Scenes from the Real Time Linux Workshop

Posted Oct 6, 2009 13:17 UTC (Tue) by drag (guest, #31333)

I don't know how much it applies to your situation, but I've come to the realization that education is vastly different from training, especially when it comes to computer science.

Many, it seems, enter a university CS degree program expecting that it will give them the qualifications to get a job later on... and many employers expect CS degrees as a way to find qualified people to work on their corporate applications.

All of this is really really bad.

Most people would probably be much better served by a vocational training school where they learn to program and deal with real-world situations using existing technology and concepts. Learn some CS, to be sure, but mostly concentrate on understanding existing systems and the practical application of programming in real-world situations.

I don't think it is really about "good" vs "bad" students; it's just about what people actually want from their education. If people are not really interested in CS as a discipline, but are interested in programming as a career, then there should be an acceptable and employer-friendly outlet for people like that.

Whereas people who are more interested in research and mathematics, and are passionate about pushing technology and that sort of thing, should still have CS to throw themselves at.

It's like how you don't need a degree in physics to be an excellent mechanic or automobile engineer. You need to know some physics, chemistry, and metallurgy; but it is more about finding creative ways to apply known concepts and practices than about pushing the boundaries of knowledge. Just as, to be a physicist, you don't need to know how to weld, or understand the proper techniques for controlling the grain structure of the metal - applying heat and cooling before and after welding to increase either strength and flexibility, or stiffness at the expense of brittleness, depending on the application.

--------------------------

It's like the Minix vs Linux thing. Real-world software systems are very complex. Unless you have a way to deal with and work within that complexity, your solutions are worthless to people who care about making software people can use.

Scenes from the Real Time Linux Workshop

Posted Oct 6, 2009 23:56 UTC (Tue) by PaulWay (guest, #45600)

> Short version seems to be that on large multicore systems, ccNUMA doesn't scale, and specific message passing between cores and sockets seems to be a good idea, though I'm not quite clear how exactly the kernel running on each core passes messages around inside the core.

I think the idea is that the message passing code deals with the different interfaces much like we deal with network interfaces - they use different signalling mechanisms and throughput logic for each interface. If one kernel wants to talk to a core on the same chip, it uses a different (optimised) interface to that used for talking to the other chip on the same motherboard, or on the GPU.
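
For the intra-chip case, one plausible shape for such an optimised interface is a cache-line-sized slot in shared memory, polled by the receiving core - roughly the style of channel the Barrelfish paper describes, though the real ones are more elaborate. A minimal single-producer/single-consumer sketch (illustrative only, using C11 atomics):

    /* Illustrative core-to-core message channel: one cache-line slot,
     * single producer, single consumer.  seq is even when the slot is
     * empty and odd when it holds a message. */
    #include <stdatomic.h>
    #include <stddef.h>
    #include <string.h>

    #define CACHELINE 64

    struct msg_slot {
        _Atomic unsigned seq;
        char payload[CACHELINE - sizeof(_Atomic unsigned)];
    };

    static void channel_send(struct msg_slot *s, const void *buf, size_t len)
    {
        while (atomic_load_explicit(&s->seq, memory_order_acquire) & 1)
            ;                        /* spin until the slot is empty */
        memcpy(s->payload, buf, len);
        atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
    }

    static void channel_recv(struct msg_slot *s, void *buf, size_t len)
    {
        while (!(atomic_load_explicit(&s->seq, memory_order_acquire) & 1))
            ;                        /* spin until the slot is full */
        memcpy(buf, s->payload, len);
        atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
    }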

I'm glad you showed me the barrelfish paper, because it gave me an insight. If we're using 'network' interfaces to talk to each kernel, then we can use actual network interfaces as well. Why not boot up another machine and have its kernels 'join' your machine's kernels across the network? Sure, you've got much larger latencies, no shared memory and different failure modes; but it's just another interface, these concepts are already well understood. You can also have processors joining and leaving different kernels on an as-needs basis.

It brings the realm of ubiquitous, scalable computing that bit closer.

Have fun,

Paul

Who's in the picture

Posted Oct 6, 2009 4:01 UTC (Tue) by nevets (subscriber, #11875)

Just because it is not stated who is in that picture, I'll list them:

From left to right.

Front row:
Peter Zijlstra, Sven Dietrich, John Kacur, Darren Hart, Steven Rostedt

Back row:
Jan Blunck, Clark Williams, Paul McKenney, Thomas Gleixner, Jon Corbet

Random instruction timings

Posted Oct 6, 2009 5:49 UTC (Tue) by kleptog (subscriber, #1183)

The note about instruction timings being random rings true to me. A while back I wrote a small kernel driver that generated random bits by taking the last few bits of a high-resolution timer whenever the timer interrupt went off.

I tried some of the standard toolkits for measuring randomness and they all concluded the data was totally random. The basic conclusion was that you could easily generate hundreds of random bits per second without too much effort. This would be useful for machines currently suffering from loss of randomness because they're headless.

It also seems a much better option than extracting randomness from network traffic, since while an attacker might be able to affect the network card, there's no way they're going to be able to affect the system timer.
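
A rough user-space approximation of the idea looks like the following; the original was a kernel driver hooked into the timer interrupt, so the clock choice and the trivial bit handling here are illustrative only:

    /* Wake up periodically and keep the least significant bit of a
     * high-resolution clock.  A real generator would pool and whiten
     * these bits rather than trust the raw LSB. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        unsigned byte = 0;

        for (int bit = 0; bit < 8; bit++) {
            struct timespec ts = { 0, 1000000 };   /* ~1ms wakeup */
            nanosleep(&ts, NULL);
            clock_gettime(CLOCK_MONOTONIC, &ts);
            byte = (byte << 1) | (ts.tv_nsec & 1);
        }
        printf("harvested byte: 0x%02x\n", byte);
        return 0;
    }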

Random instruction timings

Posted Oct 6, 2009 9:06 UTC (Tue) by dlang (guest, #313)

This calls into question the tactic of measuring timing to see what instructions were executed by other processes in order to guess encryption keys (along with things like measuring power consumption). You may be able to do this on the simple CPUs used for embedded applications, but this makes it pretty clear that this path of attack is not that big a risk on current x86 CPUs.

Random instruction timings

Posted Oct 8, 2009 14:01 UTC (Thu) by jzbiciak (guest, #5246)

The magnitude of an external cache miss is still high enough that I imagine you could see its signature even over the LS-bit noise in the TSC. It may take several runs to correlate things well enough, though, so an isolated encryption event may still hide in the noise.

Scenes from the Real Time Linux Workshop

Posted Oct 7, 2009 3:17 UTC (Wed) by dicej (guest, #36115)

The lament about the "armies of Java programmers who have little understanding of how computers actually work" makes me wonder if part of the problem is a terminology barrier similar to the one separating academics and developers. A systems programmer might be dismayed that a "clueless" application programmer doesn't know what RCU stands for, whereas the latter can't believe the "outmoded" systems programmer doesn't know what MVCC stands for. Yet if they dig a little deeper, they'll find they've been talking about the same thing applied to different domains.

Scenes from the Real Time Linux Workshop

Posted Oct 7, 2009 8:57 UTC (Wed) by farnz (subscriber, #17727)

I understand where you're coming from, but it doesn't match my experience of the "army of Java programmers". Using RCU as an example:

With a good Java programmer, I can discuss concepts like RCU, and I just have to remember that if we're using code to clarify, I must stick to Java syntax and semantics; not ideal, as my Java is very, very rusty, but we can communicate and both learn from the experience. Heck, I can even move right out of their comfort zone into things like higher order functional programming, and so long as I explain things clearly, they'll keep up.

With a member of the "army", I'm stuck. Instead of being asked to explain why RCU's interesting, what it does for you, and how it compares to other solutions to the same problem, I get asked questions like "who cares? synchronized solves all that for you". I get an even worse reaction if I start to discuss things that aren't yet part of Java; "why would you even care about that?".

My personal guess is that Java is the language that currently attracts people who don't like programming, but do it because it's the best paid of the jobs they can do. When the money moves to a different programming language, they'll follow.

Scenes from the Real Time Linux Workshop

Posted Oct 7, 2009 14:09 UTC (Wed) by nix (subscriber, #2304)

Quite so. A lot of these people don't even have any clue what
computational complexity is, and have no desire to learn: and without
knowing about *that* you have no hope of writing efficient or scalable
code. (Other things are necessary, but big-O notation and all that it
implies are essential.)

Scenes from the Real Time Linux Workshop

Posted Oct 8, 2009 14:53 UTC (Thu) by jzbiciak (guest, #5246)

I think you just described the difference between what I would call a "coder", and what I would call a "software engineer."

The member of the "army" is what I would call a coder. Someone who can output code. That's it. Give them a well defined spec and a list of things to go do, and they'll go do it. Best to leave the system architecture (the "why" and higher levels of "how") to someone who understands things at a deeper level. Give the coder good libraries and tools that make it easy to rely on those libraries so that they don't run afoul of big-Oh problems too often. Give them solid coding guidelines and they'll produce something that works more often than not. Hand them a working system that has a few bugs along with the bug reports, and they'll probably manage to fix the bugs, or at least mitigate them.

The folks in your first example are the ones you want working on the overall system and its architecture, and on writing the really tough bits. You also want them looking over the coders' shoulders from time to time to make sure they're not too far off course. :-) These sorts of guys also ought to be designing and implementing the libraries that the coders rely on, so that even if the glue between the components is dodgy, at least the components themselves are solid.

I see it as the difference between a mechanic and a mechanical engineer. A basic mechanic can follow the service manual and keep your car maintained and on the road, although a complex problem with your car might perplex them. A more experienced mechanic can even work through some of the more complex problems and do work on custom modifications. They know how to take a car completely apart and put it back together. But, you wouldn't ask them to build a car from scratch, starting with a blank piece of paper.

A mechanical engineer, though, works at a different level, figuring out how all the pieces need to work so that they can do what they do. They're working with different alloys and metals, deciding whether the engine will have an aluminum block or a cast iron block. They're working out the intake and exhaust paths to meet their design targets. They're working out the control programs that run on the engine controller to manage spark timing and air/fuel ratios under various loading conditions, and to balance power output vs. engine life. And so on. These folks are also likely to find basic auto maintenance extremely tedious, and may not even be all that great at it because their heart's not in it. But they can do it if they need to.

Scenes from the Real Time Linux Workshop

Posted Oct 8, 2009 15:06 UTC (Thu) by farnz (subscriber, #17727)

You're missing one detail about the "army" as against a coder; you can trust your coders to follow the spec and do the list of things roughly as you asked them to. The "army" is people who have the skill level of a coder, but believe that they're software engineers.

So, your coder will follow the spec. Your "army" member will ignore the spec because what he's doing is "just as good, but easier to implement" - resulting in the system your carefully written spec sized to handle 10,000 users on that bit of kit handling only 10, thanks to the changes you've encountered.

Taking your analogy to mechanics further, the "army" members are basic mechanics who genuinely believe that they're as good as or better than mechanical engineers; in particular, if a mechanical engineer were to specify a particular type of oil for a new engine, the "army" member would happily substitute 10W30, because "it's all oil, anyway". They'll move your carefully chosen spark timings and air-fuel ratios, because "everyone knows that you get more power this way". They'll change your engine to output twice the power, then complain when it lasts one year, not 20.

In short, they bodge, they apply myths, and they don't understand, but they get very upset if you dare suggest that they're a mechanic, not a mechanical engineer.

Scenes from the Real Time Linux Workshop

Posted Oct 8, 2009 21:15 UTC (Thu) by marcH (subscriber, #57642)

> Taking your analogy to mechanics further, the "army" members are basic mechanics who genuinely believe that they're as good as or better than mechanical engineers; in particular, [...] they'll change your engine to output twice the power, then complain when it lasts one year, not 20.

Just like any other analogy, this one is not perfect. It breaks down here: the wannabe mechanical engineer is fortunately limited by his tools and the physical laws of nature, whereas the wannabe software engineer can wreak havoc without any practical limit.

Oh the army

Posted Oct 10, 2009 20:45 UTC (Sat) by man_ls (guest, #15091)

It gets worse IME. You cannot even try to explain concepts like database locking, transactionality, threading, or concurrency to army members, because you will be met with blank stares. It is worse because, even though (as farnz said above) the standard answer is "who cares, synchronized takes care of that" (which of course it doesn't), database locking and transactionality are essential in most modern Java code. I've seen pools with only one object, TCP/IP connections that are tracked incorrectly (and which often break), deadlocks in database code... all of which were designed by an army member and later had to be corrected by a proper engineer.

Hey, I may not grasp a lot of the concepts in engineering, but at least I try to listen when people explain them to me.

Oh the army

Posted Oct 10, 2009 21:21 UTC (Sat) by nix (subscriber, #2304)

"synchronized takes care of that" is what the smart ones say. The dumb
ones say "I can do threads" when they mean "I have used $FRAMEWORK that
hides all the threading and multiprocessing issues from me. I used it for
six months, five years ago."

(I *wish* I was exaggerating. I really do.)

Scenes from the Real Time Linux Workshop

Posted Oct 8, 2009 11:39 UTC (Thu) by fotoba (subscriber, #61150)

This article is very good, but too short. I was there, and I gave three presentations from a practical point of view. Before the panel, I had discussed the problem of educating Linux programmers for industry, and the result was the same as in the panel discussion.

So it is good to see that there will be more texts about RTLWS11.

Scenes from the Real Time Linux Workshop

Posted Oct 8, 2009 14:54 UTC (Thu) by marcH (subscriber, #57642)

> The academic system was charged with churning out armies of Java programmers who have little understanding of how computers actually work and have no clue of the costs of things. The result is that they go forth and create no end of highly bloated systems. The really good developers, it was claimed, tend to come from an electrical engineering background - though the prevalence of hardware engineers who churn out bad code was also noted.

In other words, to be a really good software engineer, your skills need to span a wide range of abstraction levels.

Surprised?

Scenes from the Real Time Linux Workshop

Posted Oct 8, 2009 16:34 UTC (Thu) by Tara_Li (guest, #26706)

What effect does the indeterminacy of instruction timings have on the ability to debug a program? All of a sudden, it seems to me that some of the heisenbugs out there aren't actually software bugs, or bugs from improper inputs, but rather a result of the hardware not doing what it's supposed to!

At some level, this certainly seems like it makes a RT Kernel (or anything RT) absolutely impossible. If all of the bad things that *could* happen to slow down a code path, happen at the same time, you're just boned.

Absolutely not.

Posted Oct 8, 2009 20:39 UTC (Thu) by khim (subscriber, #9252)

> At some level, this certainly seems like it makes a RT Kernel (or anything RT) absolutely impossible. If all of the bad things that *could* happen to slow down a code path, happen at the same time, you're just boned.

Not really. All timings have an upper limit. But on contemporary systems the "highest possible" response time is far, far away from the "typical" time - by a factor of 100 or so. If you need hard realtime then you can pay HUGE sums and reduce it to 10 or so. Beyond that... you are stuck.

Absolutely not.

Posted Oct 9, 2009 3:51 UTC (Fri) by magnus (subscriber, #34778)

> If you need hard realtime then you can pay HUGE sums and reduce it to 10 or so. Beyond that... you are stuck.

In some cases you can add a dedicated FPGA or microcontroller to handle the realtime stuff and leave the computer to do less timing-critical (but more complex) work. If the problem can be split this way it's usually the simplest solution (IMO).

Scenes from the Real Time Linux Workshop

Posted Nov 28, 2009 18:18 UTC (Sat) by abadidea (guest, #62082)

I know I'm a bit late, but--

re: How can somebody get a degree in CS without learning about multicores?

Simple. I'm currently in my senior year of my CS degree, and three of my
four professors last had industry experience more than twenty years ago,
and the fourth never had any industry experience at all. They know
algorithms and theory just fine, but all of them are fundamentally lacking
in the ability to keep up with the times. Ajax? Ubuntu? Wait, Windows
supports more than one person having an account?!?! (I kid you not.) In our
Operating Systems class (which, btw, was purely theory), we spent six
lectures on locking and threading without ONCE mentioning the fact that
there may be more than one processor or core. When I asked the professor
about it, he dismissed it as "not common enough to worry about." This was
about six months ago.

Thank God this is a C++ school rather than Java, so at least my classmates
know a bit about memory and whatnot...

