When is an Automation Truly an AI?

Joseph Pajot
Altair Employee

The term artificial intelligence is used everywhere.  Is it possible to distinguish between clever automation and true intelligence?

Hollywood recently released a movie based on my favorite book, Dune. I first read the sci-fi novel as a teenager and have reread the first book in the series many times over the decades since. The new movie motivated me to revisit the rest of the series. Despite its futuristic setting, the fictional universe contains no computers or artificial intelligences; it has chosen to outlaw any machine made in the likeness of the human mind. Even so, the topic of artificial intelligence comes up frequently in dialogue between characters. The fourth book, God Emperor of Dune, contains an interesting exchange on the difference between an automation and an intelligence that feels relevant today, as artificial intelligence begins to take root in our everyday work.

In our real world, some have defined artificial intelligence as machines performing seemingly intelligent tasks, in contrast to the natural intelligence exhibited by humans. The computer chess program I could never beat as a child in the 1980s sure seemed intelligent, but it certainly didn't use modern machine learning to achieve that intelligence. Instead, it played the game with a rigid set of rules: a human programmer supplied logical rules for how to react to events within the game. This leads me to one of the quotes in the novel from Leto II:

“Intelligence creates. That means you must deal with responses never before imagined.  You must confront the new.”

The chess program could react to a near-infinite number of permutations of the game board. Given chess's complexity, it is conceivable that some of those board states had never been seen in the history of the game, yet the program's logical rules would still let it perform adequately against them. By Emperor Leto's measure, this chess program is intelligent.
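To make that concrete, here is a minimal sketch of what rule-based play can look like, written in Python with a toy move representation of my own invention; it is not how any real chess engine is structured. The point is that every decision is a hand-written if/then rule, yet those rules produce an answer for any set of legal moves, including positions the programmer never imagined.

```python
# Toy rule-based move selection: every "decision" is a hand-written rule,
# not a learned model. Piece values and the move format are illustrative
# assumptions, not any real engine's logic.
PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def choose_move(legal_moves):
    """Pick a move from dicts like {"move": "Qxd5", "captures": "pawn", "gives_check": False}."""
    # Rule 1: if any capture is available, take the most valuable piece.
    captures = [m for m in legal_moves if m.get("captures")]
    if captures:
        return max(captures, key=lambda m: PIECE_VALUE[m["captures"]])
    # Rule 2: otherwise, prefer a move that gives check.
    checks = [m for m in legal_moves if m.get("gives_check")]
    if checks:
        return checks[0]
    # Rule 3: otherwise, fall back to the first legal move.
    return legal_moves[0]

moves = [
    {"move": "Nf3",  "captures": None,   "gives_check": False},
    {"move": "Qxd5", "captures": "pawn", "gives_check": False},
    {"move": "Bb5+", "captures": None,   "gives_check": True},
]
print(choose_move(moves)["move"])  # -> Qxd5: the rules answer any position, even a new one
```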

Now, in contrast, imagine a CAE program that post-processes automotive crash results by extracting accelerations at preset hard points, identified for example by naming convention or ID. With a click, a summary of the vehicle's key performance indicators is generated. Is this also intelligent? Not if the rules are too rigid to accommodate new vehicles with new hard point names or IDs. That rigidity makes the program only a time-saving automation, albeit a valuable one!

“If it falls outside your yardstick, then you are engaged with intelligence, not with automation.”
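As a counterpoint, here is a hedged sketch of the rigid CAE automation described above. The channel names and the results dictionary are hypothetical stand-ins, but they show how tying the logic to a fixed naming convention means the script fails (or silently skips) rather than adapts the moment a vehicle falls outside its yardstick.

```python
# Hypothetical hard-point channel names; a real vehicle program would define its own.
HARD_POINTS = ["ACCEL_B_PILLAR_LEFT", "ACCEL_B_PILLAR_RIGHT", "ACCEL_ROOF_RAIL"]

def summarize_kpis(results):
    """results maps channel name -> peak acceleration, e.g. parsed from a crash run."""
    summary = {}
    for name in HARD_POINTS:
        if name not in results:
            # The automation has no answer for an unfamiliar vehicle:
            # it can only fail (or silently skip), never adapt.
            raise KeyError(f"expected channel '{name}' not found in results")
        summary[name] = results[name]
    return summary

# Works for the vehicle program the script was written for...
print(summarize_kpis({
    "ACCEL_B_PILLAR_LEFT": 42.1,
    "ACCEL_B_PILLAR_RIGHT": 40.7,
    "ACCEL_ROOF_RAIL": 18.3,
}))
# ...but a new program with different channel names falls outside the yardstick.
```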

In machine learning, we train models to mimic human judgment. The classic image-recognition example of choosing between a cat and a dog is a good one for discussion. How does such a model deal with the new? Shown an image of an elephant, it can still only choose between cat and dog, which is clearly a problem when handling new information. A recent blog on outlier and novelty detection suggests a way around this limitation: intelligence doesn't require predicting the unknown correctly, it only requires recognizing the unknown as such. That is why we include novelty detection in our experiments on part identification with shapeAI in HyperWorks. The image below shows collections of parts grouped by their predicted labels. The predictive model was trained to identify the geometries of bolts, nuts, and washers, but it also flags the plates as unrecognized shapes (as evidenced by the generated part sets on the far left of the interface). This recognition is one way to confront the new, and one I believe Leto would accept as intelligence. A rough sketch of the idea follows the image.

[Image: shapeAI part identification in HyperWorks, with parts grouped by predicted label (bolts, nuts, washers) and unrecognized plate shapes collected in their own part sets on the far left of the interface]
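For readers who want something concrete, below is a minimal sketch of the novelty-detection idea using scikit-learn's LocalOutlierFactor. It is not the shapeAI implementation, and the "geometric feature vectors" are random stand-ins, but it shows how a one-class model fit on the known classes can answer "unrecognized" for an input that resembles none of them.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Stand-ins for geometric feature vectors of the trained classes (bolts, nuts, washers).
train_features = rng.normal(loc=0.0, scale=1.0, size=(300, 8))

# Fit a one-class novelty detector on the same features the classifier was trained on.
detector = LocalOutlierFactor(n_neighbors=20, novelty=True)
detector.fit(train_features)

known_like = rng.normal(loc=0.0, scale=1.0, size=(1, 8))  # resembles the training data
novel_like = rng.normal(loc=6.0, scale=1.0, size=(1, 8))  # e.g. a plate, far from it

for name, x in [("known-like part", known_like), ("novel part", novel_like)]:
    # predict() returns +1 for inliers and -1 for novelties.
    label = "recognized" if detector.predict(x)[0] == 1 else "unrecognized"
    print(f"{name}: {label}")
```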

I find these philosophical ideas useful for organizing my own thoughts, although it is admittedly risky to base a world view on a fictional character living 8,000 years in the future. Regardless, it's always useful to learn from multiple perspectives. I'd love to hear your opinions on where the line between AI and automation lies.