
Are Humans Intelligent Enough to Have Their Intelligence as a Model?


Dr. Nima Schei and Dr. Hamed Tabkhi are friends who share a passion for AI. They have been working together for a long time, discussing their views on life, AI, consciousness, and everything in between!

In this first part of our series of conversations around AI, we go deeper into the minds of Hummingbirds AI founders Dr. Nima Schei and Dr. Hamed Tabkhi. You can read the interview here or listen to it on the Hummingbirds AI podcast on our YouTube channel. In this conversation, the founders explore artificial intelligence and the human mind from a different perspective.

Nima Schei: A lot of people talk about AI, but many don't really know what it is. It stands for artificial intelligence, but the term has been used and overused so much that the question arises: what do we actually mean by AI?

Hamed Tabkhi: Generally, the idea of AI comes from a very human-centric view: how can we create an artificial model of intelligence that replicates human intelligence? In that case we can get even more fundamental: are we intelligent enough to have our intelligence as a model? And what about cats or dogs? Assuming that what we mean by intelligence here is the human level of intelligence, or what we as humans perceive as intelligent, then the question is how we can create something that acts like us and thinks like us, but is artificial rather than biological.

That was the original view of AI and the aim of the field's founders. But right now we mostly use AI for automation. AI scientists and researchers are seeking to uncover the rules and foundations of intelligence and how to formulate them. Hopefully, that will empower us to create super-intelligent systems that go beyond human capability and address real-world problems beyond the reach of the human brain, and at the same time help us with our daily tasks: driving our cars, doing our laundry, or improving our homes.

Nima Schei: Do you trust them more than humans? Do you trust your Uber driver more, or a Tesla driving itself? To be clear, when we talk about AI in this case we mean specialized AI: neural nets and deep learning specifically.

Hamed Tabkhi: For driving cars, yes, I would trust AI more than humans. I feel that AI is more reliable: it doesn't get emotionally distracted, playful, or drunk! We are talking specifically about self-driving cars here; of course, I don't mean we should trust AI with every aspect of our lives. Think of the elevator. In the past, an operator rode inside and asked you which floor. We got rid of that and automated the process. Having an operator wouldn't add to your trust; it would make you more vulnerable.

It's going to be the same with cars: in the future, people will look back at us like we were crazy for driving ourselves. I have a passion for AI as well, so I am curious to see how it will play out; this has always been a source of curiosity for me. The problem is that not everyone will switch to self-driving cars overnight, so it's going to be a gradual adoption. I worry that self-driving cars will get harassed by human drivers who speed or pull crazy maneuvers around them. The challenge of AI and humans coexisting on the road is a very interesting social challenge that I'm really curious to watch.

Nima Schei: Human-machine conflict. The same way we have human-wildlife conflict in Africa, the Amazon, or anywhere humans have invaded nature. It would be the same: human-machine conflict.

Hamed Tabkhi: Think about a classroom: the kid who followed all the rules and was very organized often got teased or laughed at by classmates. In the same way, human drivers will try to take advantage of self-driving cars.

Nima Schei: What if we gave AI the opportunity to self-learn, to learn from its environment, and to be creative? First of all, is it even possible to give creativity to AI?

Hamed Tabkhi: That's a very good question, and I'm happy you brought it up, because at the end of the day self-driving cars, and AI in general, come down to the problem of data and where that data comes from. I feel there are ways we can train AI to be very human-like, or emotional, or sensational; it just depends on the kind of data. If I collected data from a bunch of artist friends and used it to teach an AI, it would become a more artistic AI, making more emotional or more extroverted kinds of decisions. But does it know it is being emotional, or sensational, or very social? Perhaps not, because the whole deep learning paradigm treats the neural net as a black box: you just feed in the data and it learns to discriminate.

It learns the differences and can generate data that seems real, very close to a human response, but there isn't much context or self-awareness. Traditionally, old-school AI experts and machine learning scientists would go with generative models, Markov decision processes, or other models that explicitly capture the properties of the problem you want to tackle. Say I want to teach an AI English or French: in that approach I need to model the entire language, so it understands all the vocabulary and semantics. That requires a lot of modeling, understanding all the rules, and rule-based design. The other way to teach an AI English or French is to just give it a lot of words and sentences, talk to it in that language, and train it to understand without ever spelling out the rules; the AI can still speak English, or at least you can understand its English. So, two different paradigms. But if you ask that deep learning AI, "Can you describe English for me? What are its semantics and grammar?", it cannot tell you.
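To make the contrast concrete, here is a minimal, self-contained Python sketch (an editor's illustration, not code from the conversation): a rule-based classifier that relies on hand-encoded vocabulary, next to a data-driven one that only counts character patterns from example sentences. All word lists and example sentences are toy assumptions.

```python
# Two paradigms for telling English from French, as described above.
from collections import Counter

# --- Paradigm 1: rule-based. We hand-encode knowledge of each language. ---
ENGLISH_WORDS = {"the", "is", "and", "cat", "house"}
FRENCH_WORDS = {"le", "est", "et", "chat", "maison"}

def classify_by_rules(sentence: str) -> str:
    words = set(sentence.lower().split())
    en, fr = len(words & ENGLISH_WORDS), len(words & FRENCH_WORDS)
    return "English" if en >= fr else "French"

# --- Paradigm 2: data-driven. No rules at all: just count character
# trigrams from example sentences and score new text by overlap. ---
def trigrams(text: str) -> Counter:
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

EXAMPLES = {
    "English": "the cat is in the house and the dog is outside",
    "French": "le chat est dans la maison et le chien est dehors",
}
PROFILES = {lang: trigrams(text) for lang, text in EXAMPLES.items()}

def classify_by_data(sentence: str) -> str:
    grams = trigrams(sentence)
    # Pick the language whose trigram profile overlaps most with the input.
    return max(PROFILES, key=lambda lang: sum((grams & PROFILES[lang]).values()))

print(classify_by_rules("the cat is here"))  # English
print(classify_by_data("le chien est la"))   # French
```

The second classifier can work reasonably well, yet if you asked it to state the grammar of English it has nothing to say: the "knowledge" is just counts, which is exactly Hamed's point.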

Nima Schei: But for humans it is sometimes the same. Think of toddlers, for example.

Hamed Tabkhi: Exactly. When we humans learn our mother tongue, it is our neural net learning through a very data-driven approach. Then, say, we go to school and learn the grammar, the rules of writing and speaking, and we get more sophisticated. So perhaps it's a combination of both. I would say humans are definitely data-driven machines, especially at an early age. I think most of our personality and character is shaped in the first three or four years of our lives. During that period we are nothing but a neural net being trained: synapses are created based on the experiences we have, through something like a reinforcement learning approach. So at some point the question is: do we consider a baby intelligent?

Well, a baby's learning is not intentional. But at some point we also start to develop critical thinking and to produce new knowledge. Does everyone have critical thinking, or just some people? Perhaps we are not all the same; perhaps we don't have the same level of intelligence, although I believe we all have the potential. Potentially we all start with a similar level of intelligence, but the experiences and education we go through change that. It's getting very philosophical, but what is intelligence in the end? Still, I think answering the AI question purely by defining intelligence misses something, because there is another part: training a neural net requires a huge number of data examples, whereas humans can learn from far fewer examples and can also transfer expertise between domains. In AI, this transferability, this handling of domain shift, is often not as good as in humans, and these are very important properties of intelligent systems. There is a huge research trend right now on learning from less data. Say I use a data-driven approach and train a self-driving car on U.S. highways: will it work on highways in Iran, China, or Saudi Arabia? I'm not sure. We still need to work on those aspects.
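One common way AI researchers attack this transferability gap is transfer learning: reuse what a network learned in one domain and adapt only a small part of it to a new domain with little data. Here is a minimal sketch of that idea in Python with PyTorch (an editor's illustration, not code from the conversation); it assumes torch and torchvision are installed, and the "new domain" data is random placeholder tensors standing in for, hypothetically, a few road-sign images from a new country.

```python
import torch
import torchvision

# Start from a network pretrained on a large source domain (ImageNet).
model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT
)

# Freeze the pretrained feature extractor so its knowledge carries over.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer for the new task (3 hypothetical classes).
model.fc = torch.nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# A tiny "new domain" dataset: 8 fake images with random labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

# Fine-tune just the new head on the small dataset.
model.train()
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.3f}")
```

Because only the small new head is trained, a handful of examples can be enough; whether this kind of reuse ever matches human-style transfer is exactly the open question Hamed raises.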

Dr. Nima Schei is an AI entrepreneur, vegan, animal rights activist, digital nomad, and financial markets geek, as well as CEO at @BELresearch and Hummingbirds AI.

Dr. Hamed Tabkhi is an Assistant Professor at UNCC. He is the founder and director of the Transformative Computer Systems and Architecture Research (TeCSAR) lab.

Want more information?

Subscribe to the Hummingbirds Newsletter for fresh information in your inbox every week.