Artificial Intelligence…or Just Software?

AI holds a lot of promise that may never be realized or live up to the hype. And then there is always the demarcation problem of what separates an actual intelligence from merely an advanced software program (the Chinese Room thought experiment). A lot of things that get classified as AI are merely productivity-enhancing software. If I want to play chess better, a software program can assist me by telling me the optimal move (within certain time constraints) against a hypothetical opponent, but is that intelligence?
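To make that concrete, here is a minimal sketch of what that kind of assistance looks like in code, using the python-chess library to ask a UCI engine for its preferred move under a fixed time budget. The Stockfish binary path is an assumption, not something from this post; adjust it for your system.

```python
# Minimal sketch: ask a chess engine for its best move under a time limit.
# Assumes python-chess is installed and a Stockfish binary exists at the
# path below; both are assumptions, not details from the post.
import chess
import chess.engine

board = chess.Board()  # standard starting position

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")  # hypothetical path
try:
    # Constrain the search to one second, i.e. "within certain time constraints".
    result = engine.play(board, chess.engine.Limit(time=1.0))
    print("Suggested move:", result.move)
finally:
    engine.quit()
```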

– Clinical diagnostics (all types of diagnostics, really) can easily be performed by AI. In fact, AI is uniquely well suited to that sort of task. Most diagnostic scans, like FDG-PET, can be accurately interpreted by narrow and simple artificial intelligences. Simple complaints, e.g., a rash like pompholyx, can be diagnosed very accurately via a combination of reverse image search and a questionnaire. AIs can interpret blood chemistry and suggest medication. Given large enough datasets, they can also run meta-analyses, post-marketing statistical surveys, and much more.
…It’s not that AIs are unable to diagnose disease (on the contrary, I’m convinced they can already do a better job than most physicians) but that there is a hell of a lot of red tape and legal liability standing in the way.
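As a rough illustration of that image-plus-questionnaire idea, here is a toy sketch. The reference "library," the feature vectors, and the question are all invented for illustration; a real system would use learned image embeddings rather than hand-written numbers.

```python
# Toy sketch of "reverse image search plus questionnaire" for a simple rash.
# The reference library, feature vectors, and question are invented for
# illustration only; a real system would use learned image embeddings.
import math

# Hypothetical reference library: condition -> image feature vector.
REFERENCE = {
    "pompholyx": [0.8, 0.1, 0.3],
    "contact dermatitis": [0.4, 0.7, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def diagnose(photo_features, answers):
    # Image half: find the most similar reference image.
    best, sim = max(((c, cosine(photo_features, v)) for c, v in REFERENCE.items()),
                    key=lambda t: t[1])
    # Questionnaire half: a yes/no answer nudges the score up or down.
    if answers.get("itchy_blisters_on_hands"):
        sim += 0.1 if best == "pompholyx" else -0.1
    return best, round(sim, 2)

print(diagnose([0.7, 0.2, 0.3], {"itchy_blisters_on_hands": True}))
```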

Hmm, but is that intelligence…or just sophisticated statistical software that makes an inference from input data and chooses the best outcome when constrained by a pre-determined threshold of statistical significance?
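Here is what that framing looks like when spelled out. The lab markers, coefficients, and cutoff below are made up purely for illustration and are not taken from any clinical model.

```python
# Toy sketch of "inference from input data, constrained by a pre-set threshold".
# The markers, weights, and cutoff below are invented for illustration only.
import math

def anemia_risk(hemoglobin_g_dl: float, ferritin_ng_ml: float, age: int) -> float:
    """Return a probability-like score from a hand-tuned logistic model."""
    # Hypothetical coefficients; a real model would be fit to patient data.
    z = 4.0 - 0.35 * hemoglobin_g_dl - 0.01 * ferritin_ng_ml + 0.02 * age
    return 1.0 / (1.0 + math.exp(-z))

THRESHOLD = 0.5  # pre-determined decision cutoff

score = anemia_risk(hemoglobin_g_dl=10.5, ferritin_ng_ml=12.0, age=58)
print("flag for follow-up" if score >= THRESHOLD else "no action", round(score, 2))
```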

Also, many tasks have not been automated by AI. As for diagnostics, when someone with rectal bleeding (I chose this as an example because it’s a pretty common complaint) goes to the doctor, the doctor defers to a ‘risk profile’ based on the patient’s symptoms, age, medical history, and family history. This profile, extrapolated from prior patient data compiled in a large database, determines the most appropriate course of action for the doctor. But because of litigation, most doctors will err on the side of caution. More or less, a middle-aged man with bleeding, even if there are no other symptoms, will still have to undergo an invasive diagnostic test, the colonoscopy: a long, flexible tube fitted with a camera and inserted under anesthesia. A century ago, and all the way up to the ’80s, doctors used a short, rigid metal tube. Yet despite decades of technological advancement, including AI, diagnosis still involves direct visualization. Same for diagnosing lung cancer. Or bladder cancer.
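A stripped-down sketch of that ‘risk profile’ logic might look like the following. Every rule, weight, and threshold here is hypothetical; real clinical guidelines are far more nuanced.

```python
# Toy sketch of the "risk profile" idea: combine age, symptoms, and history
# into a score that determines the recommended work-up. Every rule and weight
# here is hypothetical; real guidelines are far more nuanced.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    rectal_bleeding: bool
    weight_loss: bool
    family_history_crc: bool  # colorectal cancer in a first-degree relative

def recommend_workup(p: Patient) -> str:
    score = 0
    if p.rectal_bleeding:
        score += 2
    if p.weight_loss:
        score += 2
    if p.family_history_crc:
        score += 2
    if p.age >= 45:
        score += 1
    # Defensive-medicine effect: the escalation threshold is set low.
    return "colonoscopy" if score >= 3 else "watchful waiting / stool testing"

print(recommend_workup(Patient(age=52, rectal_bleeding=True,
                               weight_loss=False, family_history_crc=False)))
```

With the threshold set that low, bleeding plus middle age alone is enough to trigger the invasive test, which is the defensive-medicine pattern described above.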

A while back, Dan Luu wrote a post about AI, using healthcare and customer service as examples:

While replacing humans with computers doesn’t always create a great experience, good computer based systems for things like scheduling and referrals can already be much better than the average human at a bureaucratic institution. With the right setup, a computer-based system can be better at escalating thorny problems to someone who’s capable of solving them than a human-based system. And computers will only get better at this. There will be bugs. And there will be bad systems. But there are already bugs in human systems. And there are already bad human systems.
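In that spirit, a minimal sketch of the escalation idea might look like this. The request categories and the rule are placeholders of my own, not anything from Luu’s post.

```python
# Minimal sketch of the escalation idea: handle routine requests automatically
# and route "thorny" ones to a human. Categories and rules are hypothetical.
ROUTINE = {"reschedule appointment", "refill prescription", "billing copy"}

def route(request_type: str, failed_automated_attempts: int) -> str:
    if request_type in ROUTINE and failed_automated_attempts == 0:
        return "handled automatically"
    # Anything unusual, or anything automation has already failed at,
    # gets escalated to a person who can actually resolve it.
    return "escalated to human agent"

print(route("reschedule appointment", 0))   # handled automatically
print(route("insurance dispute", 0))        # escalated to human agent
```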

If AI is defined to mean ‘a computer performing cognitive-type tasks,’ then, yes, AI has made huge strides and will likely continue to do so, but there are still enormous gaps that may never be filled.