Monday, March 12, 2012

Artificial Intelligence and Neural Network Algorithms


By Tim Oriani, Regis '12

Imagine that you and an accomplice have been apprehended by the authorities while attempting a bank robbery. However, due to legal technicalities, the courts can bring only petty firearms-possession charges, unless one of you confesses.

So each of you is offered two choices: Cooperate or Defect, that is, remain silent or confess. (In the standard terminology, you "cooperate" with your accomplice by staying silent and "defect" by confessing.) Should one of you confess and the other remain silent, the Defector would be set free, and the confession would be used to sentence the silent Cooperator to a long prison term. In the case that both you and your accomplice choose to confess, both will be sentenced, but released early for aiding the prosecution. If both remain silent, however, the firearms charge will put both of you in jail for only a very short period of time.

The two of you are kept in separate rooms, unable to communicate. You must make your decision alone before finding out the results.

Thus was the so-called “Prisoner’s Dilemma” presented to the Senior Seminar by Mark Joinnides, Regis ’03 and a bioengineering consultant.

Prior to his visit, he had directed us to an online program in which we competed against an AI for several rounds, choosing in each round to Cooperate or Defect. We gained points according to the decisions of both parties, and each party attempted to learn the opponent’s strategy and adapt its own to win.
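I don’t know exactly what strategy the online program used, but one famously simple and effective approach to this kind of repeated play is “tit for tat,” which won Robert Axelrod’s iterated Prisoner’s Dilemma tournaments: cooperate on the first round, then mirror whatever your opponent did last. A minimal sketch in Python (the “C”/“D” move labels are just an illustrative convention):

```python
def tit_for_tat(opponent_history):
    """Cooperate ("C") on the first round; afterwards, simply copy
    the opponent's most recent move ("C" or "D")."""
    return "C" if not opponent_history else opponent_history[-1]

# Example: the opponent defected last round, so we defect back.
print(tit_for_tat(["C", "C", "D"]))   # -> "D"
```

Tit for tat is never the first to defect, retaliates immediately, and forgives the moment its opponent returns to cooperating.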

For our own minds, this does not seem so daunting a task. But what about a computer? How does a man-made program, bound by the fixed rules and restrictions of its code, keep up with the flexibility of an organic mind such as our own? These questions led the Seminar and Mr. Joinnides to explore the nature of intelligence, trying to discern the line between fluid thinking and merely following rules.


Mr. Joinnides explained the situation from the computer’s perspective as a two-by-two matrix: the Prisoner’s Dilemma has two players, each with two possible choices, and therefore four possible outcomes. Seems simple enough for a computer.
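In code, that two-by-two matrix might look like the sketch below. The seminar didn’t specify exact sentence lengths, so the numbers here are illustrative placeholders; only their ordering matters.

```python
# The Prisoner's Dilemma as a 2x2 payoff matrix. Each entry maps
# (your choice, accomplice's choice) to (your sentence, theirs),
# in years. The values are illustrative; only the ordering matters.
COOPERATE, DEFECT = "C", "D"   # stay silent / confess

PAYOFFS = {
    (COOPERATE, COOPERATE): (1, 1),    # both silent: short firearms sentence
    (COOPERATE, DEFECT):    (10, 0),   # you stay silent, accomplice confesses
    (DEFECT,    COOPERATE): (0, 10),   # you confess, accomplice stays silent
    (DEFECT,    DEFECT):    (5, 5),    # both confess: sentenced, but reduced
}

def sentences(mine, theirs):
    """Look up one of the four possible outcomes."""
    return PAYOFFS[(mine, theirs)]
```

Notice that whatever your accomplice does, confessing always shortens your own sentence (0 beats 1, and 5 beats 10), which is exactly what makes the dilemma so stubborn.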

But now imagine a computer playing chess against a human. There are twenty possible opening moves in chess, and twenty possible replies to each of them, so the computer must be programmed to incorporate a twenty-by-twenty matrix just to cover the first exchange.

Moreover, unlike the Prisoner’s Dilemma, a game of chess changes constantly, move by move. Every cell of the opening matrix leads to another matrix, possibly much larger than twenty-by-twenty, and every cell of each of those matrices leads to another one. And another and another, until the game reaches its conclusion. But that’s not intelligence, is it? I imagine it as a building, albeit an unimaginably enormous one, that a computer program travels through, floor by floor, turn by turn, to reach one of many destinations, hopefully a state of victory.
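This “building” is what computer scientists call a game tree, and the classic way to travel it is the minimax algorithm. Here is a minimal sketch; a tiny take-one-or-two-stones game stands in for chess purely so the example stays short and runnable:

```python
def minimax(pile, maximizing):
    """One call per 'floor' of the building: try every legal move,
    descend into the position it leads to, and back the result up.
    Players alternately take 1 or 2 stones; taking the last stone wins."""
    if pile == 0:
        # No stones left: whoever must move now has already lost.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2) if m <= pile]
    scores = [minimax(pile - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))   # -> 1: the first player can force a win
```

A real chess engine works on the same principle, but its building is far too large to visit every floor, so it cuts the search off at a fixed depth and estimates the value of the positions it never fully explores.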

But what if this building could change and evolve? What if its floors and pathways changed every time you played against it? Certain pathways would grow stronger, levels would rearrange themselves, and the ultimate outcome would become more assured as the computer grew more and more accustomed to the human player’s strategy.
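This “strengthening pathways” picture is essentially how simple learning programs behave. The sketch below is only an illustration of the idea, not the program the Seminar played against: each observed move strengthens a tally, and the program’s own play shifts to mirror whichever tendency has grown strongest.

```python
class AdaptivePlayer:
    """A toy 'pathways grow stronger' player for the iterated
    Prisoner's Dilemma (an illustration, not the seminar's program)."""

    def __init__(self):
        self.tally = {"C": 1.0, "D": 1.0}   # start with no expectation

    def observe(self, human_move):
        self.tally[human_move] += 1.0       # this pathway grows stronger

    def choose(self):
        # Mirror the human's prevailing behaviour: a frequency-based
        # cousin of tit for tat.
        return max(self.tally, key=self.tally.get)

bot = AdaptivePlayer()
for move in ["C", "C", "D", "D", "D"]:
    bot.observe(move)
print(bot.choose())   # -> "D": the defection pathway is now strongest
```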

Is a program instructed to improve, growing better and better at its specific task, intelligent? As our Seminar discussion progressed, we searched for an answer but couldn’t find one that was clear-cut. As it turns out, it’s not just a question of when computers will gain intelligence, that quality that so defines our species as human and superior to all others, but of when the human race will be ready to accept the existence of an intelligent computer.

As society integrates Artificial Intelligence more and more closely into everyday life, from the recommendation algorithms behind Pandora to AI robots performing surgery or carrying out military orders, issues of accountability grow more urgent. Who will be responsible when an Artificially Intelligent computer makes a mistake, possibly in a life-or-death situation?
