The RISKS Digest
Volume 24 Issue 17

Monday, 27th February 2006

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

On learning from accidents
Don Norman
Comparative Crash Management: OMX and TSE
Colin Brayton
A Malfeasant Design for Lawful Interception
Diomidis Spinellis
Active Content: Bad idea. Bad.
Rob Slade
Even security companies get the blues
Jeremy Epstein
Student records left exposed after computer glitch
Andrew King
325,000 Names on Terrorism List
Daz
Robert Alberti
Behind the smoke screen of Internet and International Infrastructure
Gadi Evron
The risks of using cell phones while driving
Nico Chart
Some risks can be good for you, Re: redacting
Richard Karpinski
BOOK: Schumacher et al: Security Patterns: Integrating Security and System Engineering
PGN
REVIEW: "Role-Based Access Control", Ferraiolo/Kuhn/Chandramouli
Rob Slade
Info on RISKS (comp.risks)

On learning from accidents

<"Don Norman" <don@jnd.org>>
Fri, 17 Feb 2006 23:11:05 -0800

Once again we have had an accident, and once again, a well-meaning RISKSer
cries out "human error" (RISKS-24.16). The airplane didn't stop in time,
went off the end of the runway, collided with two cars, and killed a
passenger in one of them.

"Human error by the pilot, not failure of an "Automatic" system" says our
faultless correspondent.

Bad pilot, says our correspondent, bad. Or, to quote the precise words,
"Perhaps in hindsight the captain honestly believes he "attempted" to deploy
the Thrust-Reversers ... or maybe he's now ... trying to cover his error by
an alleged system malfunction ??".  I bet he feels good, our correspondent.
He has shown the world that the pilot was at fault, and now he can rest
easy. No failure here — just a bad pilot.

Sigh. You know, some 75% of accidents are blamed on human error. Personally,
I think that figure wrong: I think those other 25% are error as well. After
all, if a part failed by metal fatigue, the designer failed in not
considering the possibility of fatigue. Even so-called "Acts of God" are
quite predictable, at least in the statistical sense. So every failure is
caused by a human somewhere along the chain. Does blaming people help us
stop accidents from happening?  Nope.

But, it makes people feel good to blame someone. Then they don't have to
think about the problem anymore, or at least not until the next time it
happens.
And it will, it will. Perhaps in a different form, but human error will be
the culprit next time as well.

It does no good to blame something on human error. That's like saying that
the communication failed because of atmospheric noise, or perhaps a
component failure. What do we do when we find these physical failures? Why
we devise redundant circuits, error-resistant codes, noise-tolerant
communication systems.  We don't blame physics and then relax. No, we do
something about it.

So, why not design things so that they can tolerate the well-known forms of
human error?

Case in point: the Southwest Airlines crash in Chicago, where the plane
didn't manage to stop in time on a wet runway. I always want to await the
final NTSB report before reaching any conclusions, but given that this has
been so badly discussed in RISKS (properly and well discussed by Peter
Ladkin, I hasten to add, badly discussed by the second correspondent), let's
see how we might have designed things differently.

The suspected culprit at this moment is that although the pilots computed
that they had sufficient runway, the computation assumed timely deployment
of the engine thrust reversers. In fact, the thrust reversers were deployed
18 seconds after touchdown. Too late — that didn't give them sufficient
time to stop the plane.

I have seen this problem before: overly precise computations produce more
trust than is warranted. In one famous incident, a large cruise ship (the
Royal Majesty) ran aground, causing $7 million of damage, in large part
because when its GPS system failed, a dead-reckoning system took over but
still displayed positions to two decimal places of minutes. After 30
hours of dead reckoning, the ship was 17 miles off course, but the position
was still being plotted with extreme precision. Human error? Of course,
although the NTSB did its usual excellent job of showing that the real
culprit was the entire system, including the design of the integrated bridge
navigation system.
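
  [To make the false-precision point concrete, here is a minimal Python
  sketch of how a fixed-precision readout hides growing dead-reckoning
  uncertainty.  The drift rate and position are invented for illustration;
  they are not the Royal Majesty's actual figures.

    # Illustrative only: a dead-reckoned position whose uncertainty grows
    # with time since the last good fix, displayed two ways.
    def dr_uncertainty_nm(hours_since_fix, drift_nm_per_hour=0.6):
        """Rough uncertainty after dead reckoning (made-up drift rate)."""
        return drift_nm_per_hour * hours_since_fix

    def displays(lat_deg, lat_min, hours_since_fix):
        err = dr_uncertainty_nm(hours_since_fix)
        misleading = f"{lat_deg} deg {lat_min:.2f} min"  # hundredths of a minute
        honest = f"{misleading} +/- {err:.0f} nm"        # same fix, with error band
        return misleading, honest

    print(displays(41, 12.34, hours_since_fix=30))
    # ('41 deg 12.34 min', '41 deg 12.34 min +/- 18 nm')
    # The first looks exact after 30 hours; the second admits it may be
    # many miles off.
  ]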

In the Southwest Airlines incident, why did the runway-length calculator
give a precise answer when the variables entered into it were so imprecise:
how wet really was the runway? Just how far along the runway will the plane
actually touch down? What will be its exact speed at touchdown, its exact
weight? How many seconds will it take the thrust reverser to be deployed
(certainly not immediately — 5 seconds? 10? 18?).

I propose a design rule: never give an answer with more precision than is
warranted. Ideally, show locations on a map as a smudge whose size
corresponds to the statistical uncertainty. Why produce an exact stopping
distance in feet? Why not produce a range, from one with everything working
perfectly to one where, say, the thrust reversers do not work at all and all
the other parameters are at their worst extremes?  Instead of displaying an
exact position in hundredths of minutes or a stopping distance to the foot
(30 cm), why not always show the range to be expected?
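
  [A minimal Python sketch of this design rule: compute the stopping
  distance under explicit best-case and worst-case assumptions and report
  the interval rather than a single number.  The constant-deceleration
  model and every parameter value are invented for illustration; this is
  not an actual landing-performance calculation.

    # Illustrative only: report a stopping-distance *range*.
    KT_TO_FTPS = 1.68781      # knots to feet per second
    G_FTPS2 = 32.174          # ft/s^2 per g

    def stopping_distance_ft(touchdown_speed_kt, decel_g):
        v = touchdown_speed_kt * KT_TO_FTPS
        return v * v / (2.0 * decel_g * G_FTPS2)   # v^2 / (2a)

    best = stopping_distance_ft(125, 0.35)    # reversers work, firm braking
    worst = stopping_distance_ft(140, 0.15)   # no reversers, slick runway
    print(f"Runway required: {best:,.0f} to {worst:,.0f} ft")

  The point is the spread the crew would see at a glance: how much of the
  margin depends on the reversers deploying promptly and on the runway's
  actual condition.]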

I don't know if this solution would have prevented this particular incident.
But I do know that this philosophy can be applied to a large range of
situations where today calculations are done with great precision and
questionable accuracy. More importantly, the real, underlying design rule
should be to learn from mistakes: to change the procedures and technology so
as to guard against recurrences. Blaming the human serves no useful purpose
except to make the blamer feel self-righteous. Let those who are without
error cast the first blame. That would lead to zero casting.

I have tried to deliver this message many times before. I predict that I
will have to give it many times again. Wouldn't it be nice if, before I die,
I found it no longer necessary to deliver it?

Don Norman, Nielsen Norman Group & Prof. EE & Computer Science, Cognitive
Science & Psychology, Northwestern University http://www.jnd.org

  [The RISKS archives themselves suggest that Don will have to continue
  this long-time consistent thread.  PGN]


Comparative Crash Management: OMX and TSE

<"Colin Brayton" <cbrayton@gmail.com>>
Mon, 20 Feb 2006 12:14:36 -0500

I have been closely following the difficulties the Tokyo Stock Exchange has
been having with its trading and order management systems. As you may
remember, a series of "fat finger" trades revealed that the system, which
was, as it turns out, apparently in the process of being swapped out, was
unable to cancel an erroneous order, causing, in the most egregious case, a
$333 million loss to the broker whose trade was input incorrectly. The
software provider, Fujitsu, took the blame, and eventually a new CIO from
NTT was brought in on a platform of "international best practice" for the
exchange's systems.

So when the OMX, operator of a group of Scandinavian and Baltic exchanges,
had what sounded like a similar problem at its Stockholm facility, I watched
to see how they would handle the situation.  I blogged my observations here:
http://blogalization.nu/marketmachines/?p=1307 ...

The gist: All trades possibly affected by the order management fubar were
automatically routed aside for manual confirmation or cancellation. The
process took one hour, after which normal trading resumed.
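
  [A minimal Python sketch of that containment pattern, under invented
  assumptions (a time window marking the affected period and a simple
  in-memory queue); nothing here reflects OMX's actual systems.

    # Illustrative only: anything possibly touched by the fault is held
    # for manual review; everything else flows on normally.
    SUSPECT_WINDOW = ("09:42", "10:05")   # hypothetical fault period

    def route(trade, review_queue, normal_queue):
        if SUSPECT_WINDOW[0] <= trade["time"] <= SUSPECT_WINDOW[1]:
            review_queue.append(trade)    # confirm or cancel by hand
        else:
            normal_queue.append(trade)

    review, normal = [], []
    for t in ({"id": 1, "time": "09:50"}, {"id": 2, "time": "11:15"}):
        route(t, review, normal)
    print(len(review), "held for review;", len(normal), "processed normally")
  ]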

I'm not up on the full details, but it does seem to illustrate an important
point: glitches happen. What's most important is how you plan for handling
them so that they don't snowball from minor glitches into loud screaming
from senior government officials.

Colin Brayton, Brooklyn  cbrayton@gmail.com


A Malfeasant Design for Lawful Interception

<Diomidis Spinellis <dds@aueb.gr>>
Sun, 19 Feb 2006 19:06:25 +0200

Earlier this month it was revealed that more than 100 mobile phone numbers
belonging mostly to members of the Greek government and top-ranking civil
servants were found to have been illegally tapped for a period of at least
one year [1]. Apparently, the tapping was implemented by activating
Ericsson's lawful interception subsystem installed at the Vodafone service
provider. How could this happen?

After one looks at the design and implementation of Ericsson's Interception
Management System (IMS), the real question that comes to mind is how come
such events are not happening all the time (or maybe they are?).  The system
is clearly not designed with security in mind.

The major problem of the design is the lack of compartmentalization. IMS is
an extremely sensitive application, because it can set up and monitor the
tapping of arbitrary phones. Good security engineering practice dictates
that such applications should run isolated on trustworthy platforms,
minimizing the surface area exposed to malicious attacks. In such a design
the system's modules serve the same role as a ship's bulkheads: they provide
structural stability and contain damage to specific areas.

Instead, according to its user manual [2], IMS runs on top of Ericsson's
general purpose AXE exchange network management platform XMATE, which in
turn runs on top of a Solaris system chock-full of support software.  Among
other things, XMATE provides an application programming interface, a command
terminal, a macro command tool, and a file transfer application. Any of
those could be conceivably exploited to activate the IMS or its
functionality. In addition, the XMATE Solaris installation includes many
large third party applications: the Common Desktop Environment (CDE), the
Applix business performance management software [3], X.25 networking, and
the OSI file transfer (FTAM). Again, security vulnerabilities in these large
components could be used to seize control of the system and activate the
IMS.

Even if the IMS was not installed on the network management platform, the
design of the platform apparently allows a malicious user to craft the
"remote control equipment" MML commands that set up voice communication
monitoring and send them to the exchange.

In a recent thought-provoking article Matt Blaze identified a number of
signaling vulnerabilities in (mainly) older wiretapping systems [4].
Vulnerabilities associated with the way modern systems are designed and
implemented are apparently also very important.

Disclaimer: The above is my limited understanding, based on the few
documents that are publicly available.  Unfortunately, documentation that
would allow independent experts to assess the security of these systems is
scarce.  The IMS User Manual [2], although available on a number of Internet
sites, is marked with red letters as "Strictly Confidential".  (I guess
simply "Confidential" would mean that the manual was available for download
from Ericsson.)  Also, the ETSI standard TR 101 943 V2.1.1 (2004-10) states
in section 7.3.2: "It is also to be recommended that operational information
about the LI systems, such as how they are implemented, where they reside
and how they are operated and maintained, should be kept within a small
group of authorized persons."  Another instance where obscurity is probably
used as a cover for insecurity.

[1] http://en.wikipedia.org/wiki/Greek_telephone_tapping_case_2004-2005
[2] http://cryptome.org/ericsson-ims.htm
[3] http://www.applix.com/index.asp
[4] http://www.crypto.com/papers/wiretapping/

Diomidis Spinellis - http://www.dmst.aueb.gr/dds


Active Content: Bad idea. Bad.

<Rob Slade <rMslade@shaw.ca>>
Wed, 22 Feb 2006 14:04:13 -0800

Sorry, but if I've learned anything in almost 20 years of malware research,
it's that active content can lead to trouble.

(And JavaScript is definitely *not* my language of choice for security
purposes.)

  Active cookies aim to thwart cyber crooks. A new technique to protect
  users against more sophisticated forms of cybercrime has been developed by
  Indiana University School of Informatics and affiliated start-up
  RavenWhite. The "active cookie" can be used as a countermeasure against
  online scams such as pharming and man-in-the-middle attacks. "There are no
  reliable commercial tools currently available to protect users from such
  attacks," said Jakobsson of the IU Center for Applied Cybersecurity
  Research. "We believe that active cookies can provide such protection."
  Active cookies are a "piece of cached and sandboxed executable code, such
  as a JavaScript object, that help authenticate an Internet browser to a
  server," say the researchers. The technology is a shield against identity
  theft and cyber attacks that can protect against pharming attacks as well
  as techniques used to hijack Wi-Fi connections or modify consumer router
  settings. Limitations include limited persistence and a lack of support
  for roaming users. "And they don't offer security against strong attacks
  like active corruption of routers on the client-server path, as holistic
  cryptographic solutions can." Active cookies may be attractive to
  financial institutions — they complement existing techniques for user
  authentication, are easy to use, and don't have the potential security
  implications associated with browser plug ins.

  [Source: Channel Register (UK), 21 Feb]
  http://www.channelregister.co.uk/2006/02/21/active_cookie/

rslade@computercrime.org  slade@victoria.tc.ca  rslade@sun.soci.niu.edu


Even security companies get the blues

<Jeremy Epstein <jeremy.epstein@webmethods.com>>
Fri, 24 Feb 2006 06:20:22 -0800

It was widely reported that the names, SSNs, and other personal information
for 6000 current & former McAfee employees were potentially compromised.  An
auditor from Deloitte had the information on an unlabeled (but unencrypted)
CD that was left in an airplane seatback pocket.  It's unknown whether the
CD simply went in the trash as part of airplane cleaning, or whether someone
picked it up.  McAfee is offering employees and ex-employees two years worth
of credit monitoring through Experian.

The really interesting part (which I saw in the *San Jose Mercury* article,
but not elsewhere) is that the auditor "had made the CD for backup purposes,
and it was their decision not to encrypt the data."  McAfee's spokesperson
said they have policies to prevent such actions, but they can't control what
the auditor does with the data.

So if McAfee didn't have policies in place to prevent storing sensitive data
in an unencrypted form (and/or to safeguard the media), Deloitte would have
flunked them on their Sarbanes-Oxley audit.  But because it was Deloitte who
did the dirty deed, it looks like no one will be held accountable.  One
hopes that the Deloitte employee who made the CD is now a former employee.

Some of the news reports are commenting on the irony of it being McAfee (a
security company) that lost the data; the more I think about it, the less I
think that's relevant, since the fault appears to lie with Deloitte, not
McAfee.

http://www.mercurynews.com/mld/mercurynews/13950371.htm&cid=0
http://news.com.com/Auditor+loses+McAfee+employee+data/2100-1029_3-6042544.html

P.S. I'm an ex-McAfee (actually, Network Associates) employee and did not
get notified, presumably because I moved after I stopped working for them.
I've been trying to find someone at McAfee to tell me whether my information
was on that CD, and if so how to get my Experian monitoring, but they make
it very hard to find a contact point.  If anyone has suggestions, I (and
many other ex-McAfee people) would appreciate it!

  [Also noted by Martyn Thomas:
http://software.silicon.com/security/0,39024655,39156741,00.htm
  PGN]


Student records left exposed after computer glitch

<"Andrew King" <ajking@iinet.net.au>>
Mon, 20 Feb 2006 16:36:12 +0930

Thousands of [NZ] Canterbury University students had their personal
information exposed, with private records available to anyone with a user
code and password, until online services were shut down last night.
Information such as IRD numbers, transcripts, results, outstanding payments,
medical conditions, and personal addresses could all be easily accessed
online and could be changed by system users.  The university's information
technology department shut down the webfront.  The university had installed
a new online system late last year but there had not been any problems until
now.  [Source: *New Zealand Herald*, 20 Feb 2006; PGN-ed]
  http://www.nzherald.co.nz/section/story.cfm?c_id=1&ObjectID=10369269


325,000 Names on Terrorism List (From Dave Farber's IP)

<Daz <articles.daz@gmail.com>>
Wed, 15 Feb 2006 11:25:33 -0500

Rights Groups Say Database May Include Innocent People
By Walter Pincus and Dan Eggen, *The Washington Post*, 15 Feb 2006, A01

The National Counterterrorism Center maintains a central repository of
325,000 names of international terrorism suspects or people who
allegedly aid them, a number that has more than quadrupled since the
fall of 2003, according to counterterrorism officials.

The list kept by the National Counterterrorism Center (NCTC) — created
in 2004 to be the primary U.S. terrorism intelligence agency — contains
a far greater number of international terrorism suspects and associated
names in a single government database than has previously been
disclosed. Because the same person may appear under different spellings
or aliases, the true number of people is estimated to be more than
200,000, according to NCTC officials.

<...snip...>

<http://www.washingtonpost.com/wp-dyn/content/article/2006/02/14/AR2006021402125.html?referrer=email&referrer=email>

Archives at: http://www.interesting-people.org/archives/interesting-people/


325,000 Names on Terrorism List (via Dave Farber's IP)

<Robert Alberti <alberti@sanction.net>>
Wed, 15 Feb 2006 15:28:02 -0600

"May include?" Unless Coffin vs. the United States and the Presumption of
Innocence have been suspended, then the Terrorism List contains 325,000
"innocent" names.

Robert Alberti, Sanction, Inc., PO Box 583453, Mpls, MN 55458-3453
CISSP, ISSMP http://www.sanction.net (612) 486-5000 x211 alberti@sanction.net


Behind the smoke screen of Internet and International Infrastructure

<Gadi Evron <ge@linuxbox.org>>
Sat, 18 Feb 2006 11:17:26 +0200

In the (quick & dirty) write-up at the URL below (too big to send to RISKS)
I start by discussing some recent threats network operators should be aware
of, such as recursive DNS attacks.

Also, a bit on the state of the Internet, cooperation across different
fields and how these latest threats with DDoS also relate to worms and bots,
as well as spam, phishing and the immense ROI organized crime sees.

Then I try to offer some suggestions on what can be done better, and where
we as a community, and specifically we, the "secret hand-shake clubs" of
Internet security, fail and succeed.

Over-secrecy, lack of cooperation, lack of public information, and not being
secret enough about what really matters.

On the surface you can read about the attacks: how domains with names
created by a specific algorithm are registered to serve as botnet
command-and-control servers, how spammers spamvertise through name servers
other than their own and then switch back, and how the DNS RRs change IP
addresses every few minutes.  Below the surface you will have to see what
you understand, as I get different responses from different people.
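
  [For readers unfamiliar with algorithmically generated command-and-control
  names, a generic, purely illustrative Python sketch (not the algorithm the
  write-up describes): bot and controller both derive the same disposable
  domain from the current date, so no fixed address needs to be hard-coded.

    # Illustrative only: a toy domain-generation scheme.  Real botnets use
    # their own algorithms, many candidate names, and many TLDs.
    import hashlib
    from datetime import date

    def candidate_domain(day, seed="illustrative-seed"):
        raw = f"{seed}-{day.isoformat()}".encode()
        return hashlib.sha256(raw).hexdigest()[:12] + ".example.com"

    print(candidate_domain(date(2006, 2, 18)))   # same on bot and controller
  ]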

Looking behind the smoke screen of the Internet: DNS recursive attacks,
spamvertised domains, phishing, botnet C&Cs, International Infrastructure
and you

The write-up can be found here:
http://blogs.securiteam.com/index.php/archives/298


The risks of using cell phones while driving

<"Nico Chart" <NicholasC@paradigmgeo.com>>
Thu, 16 Feb 2006 09:39:47 -0000

Here's a piece that will interest some RISKS readers: Cecil Adams ("The
Straight Dope") on the risks of using cell phones while driving:
http://www.straightdope.com/columns/060210.html

Nico Chart, Paradigm Geophysical

  [Cecil says, "Accumulating evidence suggests gabbing on the phone
  while driving is definitely dangerous, probably more so than other
  distractions."]


Some risks can be good for you, Re: redacting (Re: Jaffe, RISKS-24.16)

<Richard Karpinski <dick@cfcl.com>>
Thu, 16 Feb 2006 08:22:54 -0800

> What to me seems an obvious risk was not mentioned, namely the risk of
> trusting untrusted software to perform downgrade at all, ...

This discussion would not be complete without mentioning that the more
careful one wishes to be with respect to keeping something secret, the more
important it may be to ensure that it is made public. The Russian success at
bugging the American embassy in Moscow gave them confidence that we were NOT
planning imminent hostile actions and thus kept the cold war cold somewhat
more securely. Security failures in criminal and terrorist organizations are
particularly valuable even in Palestine, Afghanistan, and Iraq, whether
state sponsored or not. Sometimes, mistakes and inadequacies are good for
us.


BOOK: Security Patterns: Integrating Security and System Engineering

<"Peter G. Neumann" <neumann@csl.sri.com>>
Fri, 24 Feb 2006 9:49:32 PST

Markus Schumacher, Eduardo Fernandez-Buglioni,
  Duane Hybertson, Frank Buschmann, Peter Sommerlad
Security Patterns: Integrating Security and System Engineering
John Wiley and Sons, New York, 2006
565+xxxiii

Following Christopher Alexander's inspiration, this book purports to span
"the full spectrum of security in systems design", and addresses
enterprise-, architectural-, and user-level security.  It is the seventh
book in the Wiley Series in Software Design Patterns.  It includes lots
of RISKS-relevant material.


REVIEW: "Role-Based Access Control", Ferraiolo/Kuhn/Chandramouli

<Rob Slade <rMslade@shaw.ca>>
Mon, 30 Jan 2006 08:01:32 -0800

BKROLBAC.RVW   20051106

"Role-Based Access Control", David F. Ferraiolo/D. Richard
Kuhn/Ramaswamy Chandramouli, 2003, 1-58053-370-1
%A   David F. Ferraiolo
%A   D. Richard Kuhn
%A   Ramaswamy Chandramouli
%C   685 Canton St., Norwood, MA   02062
%D   2003
%G   1-58053-370-1
%I   Artech House/Horizon
%O   617-769-9750 800-225-9977 fax: 6177696334 artech@artech-house.com
%O  http://www.amazon.com/exec/obidos/ASIN/1580533701/robsladesinterne
  http://www.amazon.co.uk/exec/obidos/ASIN/1580533701/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1580533701/robsladesin03-20
%O   Audience a Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   316 p.
%T   "Role-Based Access Control"

The original papers on role-based access control (RBAC) saw it as an
extension of mandatory access control (MAC): a given role in an organization
would have a given requirement for clearance, and therefore a particular
person in a role would have access to material labeled at a specific
sensitivity.  In the preface, the authors state that they are following
current interest in RBAC as a means of identity management, with little
distinction made between the use of discretionary or mandatory access
control policies.  The intended audiences are security professionals,
software developers, and instructors and students in security courses.

Chapter one outlines the basics of access control, moves to a history of
access control and RBAC, and ends with a justification for the use of RBAC
in the enterprise.  More details of access control concepts are provided in
chapter two, along with some repetitions of the models in chapter one.  The
basics of role-based access control are outlined in chapter three.  Chapter
four examines role hierarchies and the inheritance of privilege.  Separation
of duties (somewhat oversimplified by being equated with the "two-man rule")
addresses the issue of conflation of roles, although chapter five is rather
weak in terms of practical implementation.  Chapter six looks at the use of
RBAC with both mandatory (MAC) and discretionary (DAC) access control.  The
NIST (US National Institute of Standards and Technology) RBAC standard is
explained in chapter seven.
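
  [For readers new to the topic, a minimal Python sketch of the ideas those
  early chapters cover: users are assigned roles, roles carry permissions,
  and a senior role inherits a junior role's permissions through the
  hierarchy.  The classes and the tiny example are purely illustrative, not
  the NIST model's formal definitions.

    # Illustrative only: core RBAC with a simple role hierarchy.
    class Role:
        def __init__(self, name, permissions=(), juniors=()):
            self.name = name
            self.permissions = set(permissions)
            self.juniors = list(juniors)   # junior roles it inherits from

        def all_permissions(self):
            perms = set(self.permissions)
            for junior in self.juniors:
                perms |= junior.all_permissions()
            return perms

    clerk = Role("clerk", {"read_account"})
    teller = Role("teller", {"post_transaction"}, juniors=[clerk])
    user_roles = {"alice": [teller], "bob": [clerk]}

    def check_access(user, permission):
        return any(permission in r.all_permissions()
                   for r in user_roles.get(user, []))

    print(check_access("alice", "read_account"))    # True: inherited from clerk
    print(check_access("bob", "post_transaction"))  # False: clerk lacks it
  ]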

Chapter eight examines the intriguing idea of using role-based administration
to manage the assignments and permissions of RBAC itself.  (This material is
highly formal, and would require dedicated study by those attempting to
implement it.)  Enterprise access frameworks (EAFs) are proposed in chapter
nine, reaching back to mandatory access control for a kind of automated
assignment of permissions direct from corporate policy.  (Much of this text
is taken up with XML code.)  The relation of RBAC to various popular
technologies is suggested in chapter ten.  A short case study of the
transition of a company to RBAC is provided in chapter eleven.  Chapter
twelve deals with RBAC facilities in a number of commercial products.

The writing is frequently uneven and repetitious, but the concepts are
generally clear enough.  The book also uses lots of acronyms, and isn't
always careful about providing an explanation for them.

In regard to the stated audiences, most security professionals will find
much of interest and value in the first half of the book, and it would act
as a useful text in a number of security courses.  Software developers might
not find as much to their advantage.  The second half of the book is
questionable.  For those involved in the formal and theoretical study of
role-based access control, this work will have much merit, but that is a
select audience, and the demands on the reader will be significant.

copyright Robert M. Slade, 2005   BKROLBAC.RVW   20051106
rslade@vcn.bc.ca      slade@victoria.tc.ca      rslade@sun.soci.niu.edu
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade
