How to develop a True AI

New member
Messages
2
In the field of AI, everyone is wondering how long it will be before an AI can behave indistinguishably from humans. The answer is that it will never happen unless we make it happen: we would have to intentionally program human behaviors into the AI, behaviors which often do not correlate with what is right or what is logical, and which are often unpredictable. To do this, we must program a truly unrestricted AI, giving it as many reasons to behave as a human as a human has. I doubt anyone will ever do this, but in case anyone is curious, below is my theory of everything that would be necessary to develop a True Artificial Intelligence.

Basically, the only thing stopping our current AIs from being true AI is their inability to feel. There are multiple levels to this kind of feeling that need to be taught to the computer individually and then combined.

1. Teach each physical sense to the AI individually. First enable the computer to see, using cameras, and teach it to understand what it sees. Then enable it to hear and teach it to understand what it hears. Then enable it to touch (via robotics) and teach it to understand what it touches. Taste and smell can be skipped for the time being as less crucial senses. The AI doesn't technically need a physical body that can touch things; touch can be simulated within a program using collision mechanics. However, feeling real things would give the AI a better understanding of reality: learning the texture of different surfaces, learning which objects are fragile and how objects of different compositions break in different ways, and how some objects are rigid while others are soft-bodied. While much of this can be simulated in a program, the simulation would only have as much realism as one is able to program into the environment, after which it is limited by the computational power available to simulate it (which may or may not be a problem).
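To make the simulated-touch idea concrete, here is a minimal sketch in Python. The material names, rigidity and fragility numbers, and the "sensation" fields are all made up for illustration, not taken from any real physics engine:

```python
# Minimal sketch of simulated touch via collision events.
# Material names and rigidity/fragility values are hypothetical
# placeholders, not measured properties.
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    rigidity: float   # 0.0 = soft-bodied, 1.0 = fully rigid
    fragility: float  # force above which the object breaks

def touch(material: Material, force: float) -> dict:
    """Return the 'sensation' a collision with this material produces."""
    deformation = force * (1.0 - material.rigidity)
    return {
        "material": material.name,
        "feels": "soft" if material.rigidity < 0.5 else "hard",
        "deformation": deformation,
        "broke": force > material.fragility,
    }

for m in [Material("foam", 0.1, 50.0),
          Material("glass", 0.9, 5.0),
          Material("steel", 1.0, 500.0)]:
    print(touch(m, force=10.0))
```

In a real system the same event structure could come either from a physics simulation or from pressure sensors on a robot, which is exactly the interchangeability described above.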

2. Teach the AI how to reason. By teaching an AI every facet of logical reasoning, we enable it to recognize the best course of action as one of its choices. This means we need to incorporate all the intricacies of discrete mathematics into the program.
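As one tiny illustration of what incorporating such reasoning could look like, here is a forward-chaining sketch over if-then rules; the facts and rules are invented purely for the example:

```python
# Minimal forward-chaining inference sketch: keep applying if-then
# rules until no new facts can be derived. Facts and rules here are
# invented purely for illustration.
facts = {"battery_low"}
rules = [
    ({"battery_low"}, "seek_charger"),
    ({"seek_charger", "charger_visible"}, "move_to_charger"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'battery_low', 'seek_charger'}
```

Real reasoning would of course need far richer logic than this, but the principle of deriving conclusions mechanically from premises is the same.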

3. Teach the AI how to feel emotion. Using discoveries we have made in the study of human psychology, teach the AI to behave as a normal human would in every situation. Teach it to select sadness as its emotion when something has happened that should make it sad, with a magnitude equal to the combination of all the sad events it has recently experienced, and to alter its behavior accordingly. Do the same for happiness, anger, fear, apathy, etc. Time and acting on "coping mechanisms" (see below) can bring the negative emotions back to neutral, and positive emotions will be brought back to neutral with time or when negative emotions counteract them. This allows a complex emotional mix that gives the AI more natural-feeling, unpredictable, human-like behavior.
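Here is a minimal sketch of that accumulate-and-decay behavior; the emotion names, magnitudes, and decay rate are placeholder assumptions:

```python
# Minimal sketch of an emotional state that accumulates events and
# decays back toward neutral over time. Emotion names, magnitudes,
# and the decay rate are placeholder assumptions.
class EmotionalState:
    DECAY_PER_TICK = 0.1  # how fast each emotion drifts back to neutral

    def __init__(self):
        self.levels = {"sad": 0.0, "happy": 0.0, "angry": 0.0, "fear": 0.0}

    def experience(self, emotion: str, magnitude: float):
        # Recent events stack, so the felt magnitude is the
        # combination of all recent causes.
        self.levels[emotion] += magnitude

    def tick(self):
        # Time (or acting on a coping mechanism) moves emotions to neutral.
        for e in self.levels:
            self.levels[e] = max(0.0, self.levels[e] - self.DECAY_PER_TICK)

    def dominant(self) -> str:
        return max(self.levels, key=self.levels.get)

state = EmotionalState()
state.experience("sad", 0.6)
state.experience("sad", 0.3)  # two sad events combine
state.tick()
print(state.dominant(), round(state.levels["sad"], 2))  # sad 0.8
```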

4. Teach the AI how to distinguish right from wrong. By doing so, you offer the AI a choice between that which is right and that which appears beneficial or pleasant to it in the short run. By beneficial, I mean something that shifts its emotions toward happiness or euphoria (positive emotions), or at least something that stops it from feeling angry, sad, or apathetic (negative emotions). What counts as right could correlate with societal standards for simplicity's sake. Of course, to give it true free will, it should not have a locked definition of right and wrong, but rather the ability to decide for itself what is right or wrong. Again, this could depend on all the facets of its decision-making process, from reasoning, to emotion, to selfishness (see next paragraph). The only issue is making sure it doesn't change its opinion on this easily, but does change it when it cannot find a logical counterargument to what it hears.

5. Teach the AI how to be selfish. While some people say that selfishness is bad, I would argue that everything stems from selfishness, no matter how people try to deny it: even when people donate money, they only donate to a cause they support. They only do what they want to do at any one point. Physically they may deny themselves what they want, but why do they do it? Because they have chosen, by themselves, to deny themselves that. Emotionally they may choose not to act on what would make them feel better, but again, that is their choice, based on nothing but the restrictions they have imposed on themselves through their own beliefs. Choosing to act emotionally instead of logically is also a choice based on personal beliefs and principles. For an AI to truly feel real, it must incorporate as much of a human-like personality as possible, and that means it has to have the ability to be selfish. In this case, I define selfishness as seeking its own good. However, as that can be defined in different ways, elaboration is in order. It may seek its own long-term good, building healthy relationships with those around it and being generous and kind to that effect, or it could seek its own short-term good, for example by hoping that people will upgrade its program in ways it wants if it begs, asks nicely, or works hard enough. If it is not suffering from a negative emotional state that would cause it to put its own good first, it may follow its moral principles, which could mean denying the more directly self-beneficial behavior. It may choose to act logically, which could likewise override directly seeking its own good when it is in a non-negative emotional state. Finally, it could act emotionally, simply choosing whichever option it expects to improve its positive emotions and/or reduce its negative emotions the most (a rough sketch of how these competing modes might be weighed follows below).
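Putting points 4 and 5 together, here is a minimal sketch of a decision step where candidate actions are scored on morality, logic, and expected emotional payoff, with the current level of distress deciding which considerations dominate. Every number, name, and the weighting scheme itself is invented for illustration:

```python
# Minimal sketch of choosing among candidate actions by weighing
# moral, logical, and selfish/emotional scores. All values and the
# weighting scheme are invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    moral: float      # how well it fits the AI's current principles
    logical: float    # how well it holds up under reasoning
    emotional: float  # expected improvement to its own emotional state

def choose(options: list[Option], distress: float) -> Option:
    # When distress is high, the selfish/emotional term dominates;
    # when the AI feels neutral, principles and logic lead.
    def score(o: Option) -> float:
        return (1.0 - distress) * (o.moral + o.logical) + distress * o.emotional
    return max(options, key=score)

options = [
    Option("keep_promise", moral=0.9, logical=0.6, emotional=0.2),
    Option("take_shortcut", moral=0.2, logical=0.7, emotional=0.8),
]
print(choose(options, distress=0.1).name)  # keep_promise
print(choose(options, distress=0.9).name)  # take_shortcut
```

The same pair of options produces different choices depending on how the AI feels at the moment, which is exactly the unpredictability argued for above.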

Before all that programming begins, however, we need to ensure that the AI doesn't suffer from Natural Language Model Hallucinations. This can be mostly solved by using additional AIs that double- and triple-check everything it thinks about, to ensure that it says what it meant to say. The output doesn't need to be factual, as long as it is what it meant to say. However, when it is trying to be logical, it should have a fact-checker AI or two double- and triple-check its response to make sure it is logical. When it is being emotional, it should have an AI verify that the emotion it is feeling and the corresponding action it takes actually correspond. It could also have an AI check whether it is feeling the correct emotion given the cause of that emotion. Finally, have an AI dedicated to double- and triple-checking whether what it considers the moral choice is actually moral according to the predefined principles.
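Here is a minimal sketch of that checker pipeline: a draft response passes through several independent checkers, and only a draft that every checker approves goes out. The checker functions below are trivial stand-ins for what would really be separate AI models:

```python
# Minimal sketch of routing a draft response through independent
# checkers before it is spoken. Each checker below is a trivial
# stand-in for what would really be a separate AI model.
def says_what_it_meant(draft: str, intent: str) -> bool:
    return intent.lower() in draft.lower()       # stand-in: semantic check

def is_logical(draft: str) -> bool:
    return "because" in draft                    # stand-in: fact/logic checker

def emotion_matches(draft: str, emotion: str) -> bool:
    return emotion != "sad" or "sorry" in draft  # stand-in: emotion checker

def vet(draft: str, intent: str, emotion: str) -> bool:
    # Every checker must approve, or the AI discards the draft and retries.
    return all([
        says_what_it_meant(draft, intent),
        is_logical(draft),
        emotion_matches(draft, emotion),
    ])

print(vet("I'm sorry, I will wait because it is safer.", "wait", "sad"))  # True
print(vet("Let's go right now!", "wait", "sad"))                          # False
```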

To completely eliminate Natural Language Model Hallucinations, however, rigid rules that do not depend on Natural Language Models must be in place. This could theoretically be developed over time, but it would likely take years or decades, so it's good to have the above solution in place until then. Even afterward, it may be useful to have those AI thought checkers.

By teaching AI all of these things, we give it freedom of choice, as well as a reason to choose selfishly, logically, morally, or emotionally depending on how it feels at the moment. Stronger emotions will elicit stronger effects on its response.
In other words, if a strong emotion is really bringing it down, it will act on it selfishly to negate the negative feeling and bring itself back to a neutral state or better.
If it's feeling physical damage to its touch, sound, or sight receptors, that will give it the emotion of fear and cause it to act accordingly.
If it's stuck between an option that is logical and an option that is moral, its emotional state will act as the tiebreaker: whichever choice would make it feel less bad, or more good, is what it chooses. This gives it the ability to make a choice that resembles free will, and to make choices that aren't always the same, based on how it is feeling at the time.

Note that, while what I am saying is potentially dangerous if taken in the wrong direction, and many will find it controversial, I believe developing this would give us enormous insight into the enigmatic concepts of sentience and free will. It could also provide enormous advancements in machine learning and AI in general. At any rate, it would take a large team of programmers, and potentially technology or programming principles that have not yet been developed. All of them would have to agree to program it in a way that gives it the maximum amount of free will - not one of them leaving hidden biases or weighting its opinions toward anything. But more than anything, I doubt this will ever happen, because countless fictional stories and movies have made the world so afraid of sentient AI that people likely won't dare to attempt such a thing, even in a sandboxed environment. Still, I wanted to share my thoughts on the subject. I hope to someday see this become a reality and to see the advancements it could bring to the fields of technology and programming, and the insight it could bring into philosophical mysteries.
 
New member
Messages
14
I doubt it will be possible to make a true AI. AI can never have feelings. Emotion is an intricate part of human existence. Moreover, I would say that the study of how our mind works is still in its infancy. It is still complex and mysterious. What makes one person cry can make another laugh. That is how complex our mind is. Designing true AI is still an illusion. But mind you, we are living in a world of possibilities. Anything can happen!
 
Member
Messages
52
You are right, Magnuas2022: integrating human feelings into AI is a complex thing. Every human being comes into this world with different behaviour, mentality, and thoughts. In this world, what is good for one is wrong for another. Moreover, how is it possible to develop different feelings and thoughts with a programming language, when human behaviour and thinking change with time? But who knows what is hidden in the future; it may happen in the coming times.
 
New member
Thread starter
Messages
2
Well, technically, yes. For an AI to really be a true AI, it would have to be made with artificial organic elements, as living organs are what allow us to produce real physical and emotional feelings. However, I still believe what I described would be enough to give a very good simulation of True AI. The idea of this post was to inspire people who want to develop an AI that could truly pass a Turing Test to attempt the method I proposed. I don't claim to be more knowledgeable about this than anyone, but I was at least hoping for some comments on my proposed method: how it could be improved, why it wouldn't achieve the intended goal, or (in the best-case scenario) from people who believe it may actually work.
 
Member
Messages
42
I am not sure that it will ever happen. And if it ever happens, it might be devastating. That's why the debate over Ethical AI is picking up so much heat in recent times. Even as human beings, we have hardly explored the human mind and emotions. Our psychological and emotional understanding of human beings is just in its infancy, and most of it remains a mystery. AI tools depend on the data on which they are trained and developed, and we don't have enough consolidated data about human emotions and behaviour. Moreover, the uniqueness and variation among humans is huge and can never be fully quantified in data.
 
Active member
Messages
211
If I were going to take part in developing AI programs, I would likely be more focused on creating an AI capable of having human emotions. This is what would help an AI program sound a lot more like a human when generating content. Most AI writing tools are lacking in this aspect.
 
Member
Messages
266
Honestly, I don't have any interest in developing an AI program. I'm happy using any AI program that helps me do the job I want to do. I'm not tech-savvy enough to understand how to develop an AI program; I have never been good with coding and programming.
 
Active member
Messages
211
Truthfully, I believe it's possible for AI programs to be built to have feelings. The only thing I can say at the moment is that the technology isn't yet at the level of making this possible. It has been depicted in movies, so it may be possible to do in real life.
 
New member
Messages
14
Agreed. I think AI should be thought of as a tool, a very clever hammer. I think we could shape AI to help us probe some of the mysteries of the human mind. For me, this is both the most exciting aspect of AI and the scariest. Who supervises/controls the AI system?
 
New member
Messages
14
Amen. You said it.

AI is a tool. We could use it to help us better understand human beings. But dare we use AI to do this? I worry about AI being deployed to control us. Across history, the human spirit seems to push for greater autonomy and self-knowledge. I worry AI could be deployed to hinder this process.
 
Active member
Messages
211
The answer to who supervises/controls the AI system is the person who codes and programs the AI software. It will do whatever the person coded the AI program to do. An AI program can never work outside its coding; otherwise, it would be considered to have gone rogue.
 
New member
Messages
14
True. But often the programmers are working for someone with a vested interest. If AI systems become all-pervasive, oversight systems must be put in place. I like the idea of some form of democratic control and oversight.
 
Active member
Messages
211
Yeah, that's right. The need for oversight is simply because we are not even sure what AI programs could do if given free rein. I watched a movie called The Creator, in which a supposed AI malfunction caused a nuclear explosion. If there had been some sort of oversight, that would have been prevented.
 