Meatservo wrote: Dogs, too. By your own metrics, dogs are safer drivers. Demonstrably. Why is it OK to have humans in the cockpit when we could have dogs?
Bullshit aside, as I'm sure you know, computers don't actually "do" anything. Every action a computer takes was really "done" by a human, at some point in the past, in anticipation of a particular confluence of events. The only thing a computer can do is recognize an event (if you want to call it "recognition", which I don't, because it implies "cognition", which is beyond the scope of any if/then/else logic) and react to it in the way it was instructed to react by a human who, at some point in the past, anticipated that event.
So they're not better at things than humans. Let's make that clear. The extent of a computer's ability is to mechanically process input faster than a person. Any activity that requires actual interpretation and thought is beyond the scope of machine logic. There are those who believe that flying is one of those activities. Driving is different. The main source of danger in the driving world is the erratic behaviour of other drivers (and the erratic behaviour of sandbags, apparently). Until the other drivers are as utterly predictable as a computer, the danger will still be there. I consider automated driving to be an "all or nothing" proposition. The risks in flying are different and require forethought and recognition, which are beyond a computer's scope in anything other than the most benign environment.
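The point about pre-programmed reactions can be sketched in a few lines. This is a hypothetical toy, not any real avionics logic; the event names and the `react` helper are made up for illustration:

```python
# A computer's "decision making" is just a lookup of reactions a human
# wrote down in advance. Hypothetical toy example, not a real system.
anticipated = {
    "stall_warning": "pitch_down",
    "low_fuel": "alert_crew",
    "cabin_depressurization": "deploy_masks",
}

def react(event):
    # The machine only "recognizes" what a programmer anticipated;
    # anything else falls through with no meaningful response.
    return anticipated.get(event, "no_programmed_response")

print(react("stall_warning"))      # pitch_down
print(react("sandbag_on_runway"))  # no_programmed_response
```

However fast the lookup runs, the sandbag case shows the limit: an event nobody anticipated gets no useful reaction, which is the whole argument above.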
A brilliant summation. And I savoured every moment of your dog satire!
The problem is people keep trying to extrapolate technological advances of the past into the future. It just doesn't work that way. It's like we're back in the '50s with everyone ooohing and aaahing about technology. Rubes, really. Quick everyone, line up to get your $1,400 iPhone X so you can get your face scanned endlessly for the convenience of...unlocking your phone! hahah!
Until truly self-aware machines exist (ugh), the best option in many scenarios (particularly life-threatening ones) will continue to be critically-thinking humans augmented by fast, computationally powerful, but ultimately dumb machines. A good example of this is chess, where the strongest players are neither humans alone nor computers alone, but humans augmented by computers. Hmmm. A more mundane example of this is...well, aviation.
Yes, the tech will continue to develop at a breakneck pace. Yes, it will replace jobs, many of them, or at least certain functions of certain jobs, or very much improve the ability of humans to do THEIR jobs. (Gee, that sort of sounds like pilots and airplanes!) But even the most sophisticated, elaborate algorithms will always be algorithms, containing all of the imperfections of their programmers. It's hubris to think otherwise. And then there's the fact that the more complex systems become, the more they are subject to error and unintended consequence, errors that often stay hidden until a crisis uncovers them without warning. Not exactly a great model for something with consequences for error as large as aviation's.
“We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.
Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
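The contrast in that excerpt can be made concrete. An electromechanical interlocking has a state space small enough to write on paper and check exhaustively; the sketch below is a hypothetical two-signal, one-gate crossing invented for illustration, not a real design:

```python
from itertools import product

# Toy interlocking: two signals and one crossing gate. With so few
# components, every configuration can be listed and checked by hand,
# which is the point the quoted passage makes about pre-software systems.
signals = ["red", "green"]
gate_positions = ["up", "down"]

states = list(product(signals, signals, gate_positions))
print(len(states))  # 8 configurations: the entire system, exhaustively

# Safety rule: no signal may show green while the gate is up.
unsafe = [(a, b, g) for (a, b, g) in states
          if g == "up" and "green" in (a, b)]
print(len(unsafe))  # 3 of the 8 states violate the rule
```

Eight states can be tested one by one with physical trains. Software offers no such bound: edit a text file and the state space explodes past anything a person can enumerate, which is Leveson's point about systems beyond our ability to intellectually manage.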
The Coming Software Apocalypse
I do warn that the linked article will definitely exceed the attention span of the average AvCanada reader by a large margin...
I’m still waiting for my white male privilege membership card. Must have gotten lost in the mail.