Last class we talked about categories of algorithmic problems and complexity theory.
The next step beyond this is to ask how these categories, such as the "recursively enumerable languages" (i.e. what a Turing machine can do), compare to what we can do. Can a Turing machine be as intelligent as a person? What does that even mean?
So today's discussion topic is AGI - artificial general intelligence. More generally, this is part of the "philosophy of mind". Here are some starting points for reading about all this.
And here's a longer list of references and philosophers ... some I agree with and some I don't: my Marlboro philosophy of mind links
What do you think ... and what is your argument for why you think that?