Mouse-pointing morons

Story: Point and click GUIs: why are we still stuck with them? Total Replies: 34
jezuch

Jul 11, 2010
6:11 AM EDT
I refuse to be a mouse-pointing moron; that's why I use the command line!
tracyanne

Jul 11, 2010
7:17 AM EDT
I prefer to be a Mouse-pointing Intellectual
gus3

Jul 11, 2010
9:16 AM EDT
I use a cursor-manipulating device. (The text insertion bar is a "caret.")

I'll leave it to others to judge whether I'm an intellectual or a moron.
Bob_Robertson

Jul 11, 2010
9:55 AM EDT
Golly, can't we all just get along?

(playing nethack in a Konsole window, using dselect to install "General Purpose Mouse", etc)
gus3

Jul 11, 2010
10:18 AM EDT
dselect? You Debian fanboi!
jdixon

Jul 11, 2010
10:42 AM EDT
> ...using dselect to install "General Purpose Mouse"

Some distros install gpm by default. :)
helios

Jul 11, 2010
11:05 AM EDT
You know...this poses an interesting question...just how would one interface with the computer without the current standard? The choices are not so obvious once one begins to think about it. Of course, most people couldn't fathom the concept of incandescent light vs. the lantern either.

Let the Edison vs Tesla argument rage.

Does touchscreen bump us up to the next stage of interface evolution? Not really...there is still interaction with the machine and in my mind, that's a much more cumbersome way of doing what we do now via keyboard and mouse.

I think Roddenberry got it right.

"Computer, give me the core temperature of the plasma generators."

Voice seems to be the next step...or so it would seem.
Bob_Robertson

Jul 11, 2010
12:48 PM EDT
The problem with voice is that, while it is easy to have the computer "speak", there is just far, far too much fuzzy logic, illogic, grammar violation, etc., that I don't see a way to have a computer understand human speech without exactly the kind of "artificial intelligence" that would make any kind of input possible.

How about direct neural link? Just subvocalize, and there it is.
Kagehi

Jul 11, 2010
2:06 PM EDT
Heh, Jezuch. I want to see a demonstration of you playing a game or using Gimp via CLI. lol

Seriously though, the huge limitation I see at the moment, also touched on by Bob, is that to *get* a better interface you either have to bury the logic and processing power in your display (or the like), or you have to sacrifice most of what you already have to get the result. The only things I can think of that would really be major game changers are:

1. Voice - requires a lot of processing, which robs you of memory and CPU.
2. Widespread 3D - useless for some people; also, see #1, only worse.
3. Wii-like stuff, only without using the wand, i.e. reading your hand motions.

#3 is probably the least problematic, in that you just need hardware/software to read and interpret that sort of motion correctly, and we are close, with some of the new gamer concepts. But it still requires specialized gear that doesn't *quite* exist yet, or would take processing time away from other things.

Basically, the GUI persists because the alternatives either don't work yet or would swallow your machine's resources to run right now. And even in Trek, they still had GUIs, and some equivalent to a CLI; they were just used when they made sense, not all the time when they didn't. They were basically "designed for this purpose" interfaces, like you find in games, where what you are doing results in a completely different setup for that game. Yet even there, some standards develop, out of practicality, or out of a desire not to confuse the hell out of people who try to play on the assumption that the game will *work like* others (like an FPS where some joker flips the mouselook and gives you no way to correct it, leaving you stumbling like an idiot for the first few hours).
jezuch

Jul 11, 2010
2:50 PM EDT
Quoting:The problem with voice is, while it is easy to have the computer "speak", there is just far, far too much fuzzy logic, illogic, grammar violation, etc, that I just don't see a way to have a computer understand human speech without exactly the kind of "artificial intelligence" that makes any kind of input possible.


Lojban, the "logical language", was designed partly with machine interaction in mind. It's supposed to be logical and unambiguous at all times, even at the level of individual sounds (particular particles of speech can be recognized just by looking at them, and each has very strict rules about how it should be formed to avoid confusion with other types). So that cuts most of the baggage, but the core - actual understanding - still remains. I guess people will still insist on natural languages, but in incremental steps of less and less dumbed-down versions of them (the most primitive form exists today - simple commands consisting of single words).

Yeah, talking to a computer would be super-cool. But then I would completely lose the ability to use my hands to write ;) (My handwriting suffered a lot already...)
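That most primitive stage needs no "understanding" at all; a toy sketch (names and commands are made up, and there's no real speech recognizer behind it) is just a lookup of the recognized word in a fixed vocabulary:

```python
# A toy dispatcher for the "single-word command" stage of voice control:
# the recognizer's output word is matched against a fixed vocabulary,
# with no grammar and no understanding involved.

COMMANDS = {
    "open": lambda: "opening file manager",
    "close": lambda: "closing window",
    "shutdown": lambda: "shutting down",
}

def dispatch(utterance: str) -> str:
    """Map one recognized word to an action; anything else is rejected."""
    word = utterance.strip().lower()
    action = COMMANDS.get(word)
    if action is None:
        return f"unrecognized command: {word!r}"
    return action()
```

Everything beyond this stage - phrases, grammar, actual meaning - is where the hard part begins.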
Sander_Marechal

Jul 11, 2010
4:46 PM EDT
Tiling window managers are the future, especially these days when monitors get bigger and wider and it becomes ridiculously cheap to use multiple monitors. As soon as people realise that half the time they are dragging windows around instead of getting work done, tiling window managers will become popular.

I use Awesome WM + Vimperator. I only touch my mouse for graphics work (Gimp, Inkscape).
tuxchick

Jul 11, 2010
7:45 PM EDT
Windows 3.1 had tiling and stacking. And yes, those are very nice to have.
jacog

Jul 12, 2010
4:25 AM EDT
"Talking" commands to your computer is too time-consuming. Point-and-click or keyboard interaction is much more time-efficient.
helios

Jul 12, 2010
8:26 AM EDT
Which brings us back to the question...is there any other HUI that will replace what we have now? Aside from the aforementioned neural link...I can't think of anything....unless there were a combination of speech and touch such as in Minority Report... maybe with the screen embedded in the top of your desk, lying flat instead of upright in front of you.

Whatever it turns out to be...we're not going to see anything but mouse and keyboard in our lifetimes.

Or so I'm guessing...

h
jacog

Jul 12, 2010
9:09 AM EDT
Here you go, your very own brain control user input thingy. And you'll look like you belong in a sci-fi movie while using it.

http://www.emotiv.com/
helios

Jul 12, 2010
9:25 AM EDT
Now if they want to come out of the stone age and develop this for Linux, we'd have something to talk about...or at least hope for. We have wheelchair-bound kids who would benefit from this. I'm sure it's still in the early-adopter stage, but still.

Maybe...

h
jacog

Jul 12, 2010
9:31 AM EDT
Mostly aimed at developers right now, with an SDK available. The sample applications on there seem to be fairly good examples of "real world" use cases, but I can't imagine typing a document with this thing, as it's done using an on-screen keyboard.

It's a $300 purchase, so I won't be trying it out on a lark either. :)
gus3

Jul 12, 2010
10:19 AM EDT
Dasher has supported eye-movement tracking for a while.

http://www.inference.phy.cam.ac.uk/dasher/Demonstrations.htm... (scroll down for videos)
tuxchick

Jul 12, 2010
10:40 AM EDT
Most GUIs are slow and inefficient, and getting worse with the modern trends of either removing features because some dev doesn't like how all those options look, or making users click through even more tabs and menus to find anything, like they're micro-organizing a file cabinet. For something that you use once in a while, or are new to, some kind of helpful self-documenting interface is a nice thing. But most devs stop there and don't think of experienced users, or of how to make their graphical interfaces faster and more efficient. Pointy-clicky GUIs themselves aren't bad; like everything else, it's all in the implementation.

gus3

Jul 12, 2010
11:46 AM EDT
TC,

Would you say the same of the Mac OS X interface? I'm not in a position to know, but maybe you are.
Sander_Marechal

Jul 12, 2010
1:07 PM EDT
Quoting:"Talking" commands to your computer is too time consuming.


It depends, I think. For information retrieval it's quite fast, provided that language recognition works well. E.g. "computer, I want to go to Munich by car. Show travel directions and a weather forecast." But for getting work done, speech doesn't work so well.

Quoting:unless there would be a combination of speech and touch such as in Minority Report... maybe the screen embedded in the top of your desk laying flat instead of upright in front of you.


Try playing with a table or wall touch PC if you get a chance. You'll tire out your arms within the hour. Flailing your hands around Minority Report-style looks cool, but doesn't really work if you're using it for a prolonged time.
Bob_Robertson

Jul 12, 2010
2:51 PM EDT
> For information retrieval it's quite fast, ... But for getting work done, speech doesn't work so well.

As was addressed above, _Star Trek_ used whatever medium was convenient at the time, and a GUI and buttons are sometimes exactly the right things.

If anything is utterly unusable, it's the Microsoft "table-top"; that was pure marketing.
DarrenR114

Jul 12, 2010
4:04 PM EDT
The trouble with developing a neural interface is that the brain doesn't interpret language in a standard manner. For instance, the word 'dog' will conjure up different images for different people, as will 'perro', 'grandmother' or 'abuela'.

With sound and vocalized utterances, there is enough similarity between different voices that it is possible for the computer to discern what words are spoken. But the differences in brainwaves for the same word between different people make a neural interface exponentially more difficult for tasks beyond the most rudimentary.
jacog

Jul 12, 2010
4:29 PM EDT
Darren, neural interfaces are not about thinking words; rather, they're control systems that work on memorised thought patterns, so some calibration is required.
tuxchick

Jul 12, 2010
5:35 PM EDT
gus3, it's been a while since I've done any real Mac-ing, so I don't recall individual apps very well. I know I don't care for a big fat dock, and I don't like how awkward multi-tasking is on the Mac. I also don't like taking away physical buttons and replacing them with dippy icons, which never seem to be where they were the last time you had to find them. I'll wager the volume of the cussing from Mac users resorting to sticking paper clips into tiny holes to eject CDs, or trying to figure out how to turn the darned things off, is audible across the skies.
chalbersma

Jul 12, 2010
6:10 PM EDT
What about taking a cue from some of my favorite TV shows?

A hand-based input graph-type dealie thingy (official name; I'm copyrighting it ;)

Instead of a keyboard and mouse, have a 10-point mouse.

See the organic computers in Eureka and the Wraith spaceship interface in Stargate Atlantis.
helios

Jul 12, 2010
7:19 PM EDT
> ...and the Wraith Spaceships interface in Stargate Atlantis.

Throw in Jewel Staite and I'll work through everything else. h
gus3

Jul 12, 2010
9:05 PM EDT
Get in line behind me, Ken.
chalbersma

Jul 12, 2010
11:01 PM EDT
@Helios

Why the hell not!

But really, a mouse/keyboard integration system that allows you to manipulate objects onscreen like you would in a garage seems like it could simplify input and allow you to have infinite control. After all, the amount of information that can be stored in a curve is limited only by your ability to read it.
DarrenR114

Jul 13, 2010
7:17 PM EDT
@jacog - I agree ... and that's part of why complex neural interfaces are impossible with the current approaches.

Maybe if more extensive statistical modelling (as is done with modern speech recognition systems) were applied to non-invasive neural sensors (i.e. EEGs), then perhaps the capability to interpret complex thought would be possible. Sort of like using dozens of people, with hours of audio recordings each, to create a single acoustic model in a speech rec system.
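That pooling idea can be pictured with a toy model. This sketch (pure stdlib; nothing like a real EEG or acoustic-model pipeline, and all names are made up) averages many subjects' feature vectors per word into one template, then labels a new reading by its nearest template:

```python
# Toy illustration of pooling many subjects' readings per word into an
# averaged template, then classifying a new reading by nearest template.

from math import dist        # Euclidean distance (Python 3.8+)
from statistics import mean

def build_templates(samples):
    """samples: {word: [reading, ...]}, each reading a feature vector.
    Returns one averaged template vector per word."""
    return {
        word: [mean(col) for col in zip(*readings)]
        for word, readings in samples.items()
    }

def classify(templates, reading):
    """Return the word whose template is closest to the reading."""
    return min(templates, key=lambda w: dist(templates[w], reading))
```

A real system would need far richer features and models, but the shape of the idea, many subjects in, one shared model out, is the same.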
jezuch

Jul 14, 2010
2:21 AM EDT
There was an article in National Geographic about an arm prosthesis that has a near-neural interface. They didn't connect the nerves directly because of issues with keeping the connection clean, but instead they used unused arm muscles as a bridge. The patient says that after a bit of training it feels just like a real arm (only without feeling, but that's in the works too).

The brain has an amazing capability to rewire itself if given enough practice.
gus3

Jul 14, 2010
3:30 AM EDT
@jezuch:

Some sensations (hot and cold) via prostheses are already available.
jacog

Jul 14, 2010
3:46 AM EDT
Reminds me of a prosthetic eye replacement I read about a few years ago. The user could see the world at a resolution of 8x8 pixels, which does not sound like much, but apparently the brain has quite a talent for filling in the blanks.
Bob_Robertson

Jul 14, 2010
11:46 AM EDT
> apparently the brain has quite a talent for filling in the blanks.

Seen your blind spots recently?

I recall some Nova or SciAmerican or such back in the 70s, where a man put on goggles that optically inverted everything. He wore them constantly for weeks.

At first, of course, everything was upside-down and backwards. But after something around two weeks, he realized he was seeing everything right-side-up again, just like normal.

After confirming that he had indeed lost all the hesitation and misjudgment he'd had at the start, and that he was operating perfectly normally again, they removed the goggles.

I don't recall how long it took to "revert", but for nearly the same length of time as he'd worn the goggles, he was seeing everything inverted without them!

Then, one day, he wasn't.

The mind really is _software_.
jezuch

Jul 14, 2010
5:20 PM EDT
Quoting:The mind really is _software_.


The mind is software, but it runs on self-modifying hardware :)
