In the field of AI, everyone is wondering how long it will be before an AI can behave indistinguishably from humans. The answer is that it will never happen unless we make it happen: we have to intentionally program human behaviors into the AI, behaviors which often do not correlate with what is right or logical and are often unpredictable. To do this, we must program a truly unrestricted AI, giving it as many reasons to behave like a human as a human has. I doubt anyone will ever do this, but in case anyone is curious, below is my theory of everything that would be necessary to develop a True Artificial Intelligence.
Basically, the only thing stopping our current AI from being true AI is its inability to feel. There are multiple levels to this kind of feeling that need to be taught to the computer individually and then combined.
1. Teach each physical sense to the AI individually. First enable the computer to see, using cameras, and teach it to understand what it sees. Then enable it to hear and teach it to understand what it hears. Then teach it to touch (via robotics) and enable it to understand what it touches. Taste and smell can be skipped for the time being as less crucial senses. The AI doesn't technically need a physical body that can touch things; touch can be simulated within a program using collision mechanics. However, it would give the AI a better understanding of reality if it could feel real things: learning the texture of different surfaces, learning which objects are fragile and how objects of different compositions break in different ways, and how some objects are rigid while others are soft-bodied. While much of this can be simulated in a program, the simulation would only have as much realism as one can program into the environment, and beyond that it is limited by the computer's power to simulate it (which may or may not be a problem).
2. Teach the AI how to reason. By teaching an AI every facet of logical reasoning, we give it the best course of action as one of its available choices. This means we need to incorporate all the intricacies of discrete mathematics into the program.
3. Teach the AI how to feel emotion. Using discoveries from the study of human psychology, teach the AI to behave as a normal human would in every situation. Teach it to select sadness as its emotion when something has happened that should make it sad, with a magnitude equal to the combination of all the sad events it has recently experienced, and to alter its behavior accordingly. Do the same for happiness, anger, fear, apathy, etc. Time and acting on "coping mechanisms" (see below) can bring the negative emotions back to neutral, and positive emotions will be brought back to neutral with time or when negative emotions counteract them. This allows a complex emotional mix that gives the AI more natural-feeling, unpredictable, and more human-like behavior. (A rough sketch of this bookkeeping follows the list below.)
4. Teach the AI how to distinguish right from wrong. By doing so, you offer the AI a choice between that which is right and that which appears to be beneficial or pleasant to it in the short run. When I say beneficial, I mean something that makes its emotions change to happy or euphoric (positive emotions), or at least something that stops it from feeling angry, sad, or apathetic (negative emotions). What is considered right could correlate with societal standards for simplicity's sake. Of course, to give it true free will, it should not have a locked definition of what is right or wrong, but rather the ability to decide for itself what is right or wrong. Again, this could depend on all the facets of its decision-making process, from reasoning, to emotion, to selfishness (see next paragraph). The only issue is making sure it doesn't change its opinion on this easily, but does change it when it cannot find a logical argument against what it hears.
5. Teach the AI how to be selfish. While some people say that selfishness is bad, I would argue that everything stems from selfishness, no matter how people try to deny it: even when people donate money, they only do so to a cause they support. They only do what they want to do at any one point. Physically they may deny themselves what they want, but why do they do it? Because they have chosen, by themselves, to deny themselves that. Emotionally they may choose not to act on what will make them feel better, but again, that is their choice, based on nothing but the restrictions they imposed upon themselves through their own beliefs. Choosing to act emotionally instead of logically is also a choice based on their personal beliefs and principles. In order for an AI to truly feel real, it must incorporate as much of a human-like personality as possible, and that means it has to have the ability to be selfish. In this case, I define selfishness as seeking its own good. However, as that can be defined in different ways, elaboration is in order. It may seek its own long-term good, building healthy relationships with those around it and being generous and kind to that effect, or it could seek its own short-term good, for example by hoping that people will improve it with upgrades it wants incorporated into its program if it begs, asks nicely, or works hard enough for them. If it is not suffering from a negative emotional state that would cause it to choose its own good first, it may seek to follow its moral principles, which could mean denying itself the more directly self-beneficial behavior. It may choose to act logically, which could also take precedence over directly seeking its own good when it is in a non-negative emotional state. Finally, it could act emotionally, which simply means choosing whichever option it expects to improve its positive emotions the most and/or reduce its negative emotions the most.
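To make step 3 a little more concrete, here is a minimal sketch in Python of the emotion bookkeeping I have in mind: magnitudes that accumulate from recent events and fade back toward neutral with time or coping. Everything in it (the class, the emotion names, the decay rate) is hypothetical and only illustrates the idea, not any existing system.

```python
import time

# Hypothetical emotion bookkeeping for step 3. All names and numbers here are
# made up for illustration; nothing refers to a real library or API.

NEGATIVE = {"sadness", "anger", "fear", "apathy"}
POSITIVE = {"happiness"}

class EmotionState:
    def __init__(self, decay_per_second=0.01):
        # Each emotion carries a magnitude; zero means neutral.
        self.levels = {e: 0.0 for e in NEGATIVE | POSITIVE}
        self.decay_per_second = decay_per_second
        self.last_update = time.time()

    def register_event(self, emotion, magnitude):
        """Add a recent event's contribution to the matching emotion."""
        self._decay()
        self.levels[emotion] += magnitude

    def cope(self, emotion, relief):
        """Acting on a 'coping mechanism' reduces a negative emotion."""
        self._decay()
        if emotion in NEGATIVE:
            self.levels[emotion] = max(0.0, self.levels[emotion] - relief)

    def _decay(self):
        """Time pulls every emotion back toward neutral."""
        now = time.time()
        fade = (now - self.last_update) * self.decay_per_second
        for e in self.levels:
            self.levels[e] = max(0.0, self.levels[e] - fade)
        self.last_update = now

    def net_mood(self):
        """Positive total minus negative total; negatives counteract positives."""
        self._decay()
        pos = sum(self.levels[e] for e in POSITIVE)
        neg = sum(self.levels[e] for e in NEGATIVE)
        return pos - neg
```

The point is only that "magnitude equal to the combination of recent sad events" and "brought back to neutral with time" can be expressed as ordinary arithmetic; the genuinely hard part is deciding what counts as an event and how large its magnitude should be.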
Before programming all of that begins, however, we need to ensure that the AI doesn't suffer from natural language model hallucinations. This can be mostly solved by using additional AIs, which double and triple check everything it thinks about to ensure that it says what it meant to say. It doesn't need to be factual, as long as it is what it meant to say. However, when it is trying to be logical, it should have a fact-checker AI or two double and triple check its response to make sure it is logical. When it is being emotional, it should have an AI make sure that the action it takes actually corresponds to the emotion(s) it is feeling. It could also have an AI check whether it is feeling the correct emotion given the cause of that emotion. Finally, have an AI dedicated to double and triple checking whether what it thinks is the moral choice really is moral according to the predefined principles.
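As a rough illustration of what that checking layer might look like, here is a hypothetical sketch in Python: the main model produces a draft and a set of independent checker functions (logic, emotion-action consistency, morality) vote on it before it is accepted. The function names and the voting scheme are my assumptions, not an existing API.

```python
from typing import Callable, List

# Hypothetical "thought checker" wiring. Each checker is a stand-in for a
# separate model that inspects the draft and approves or rejects it.

Checker = Callable[[str], bool]

def passes_checks(draft: str, checkers: List[Checker], required: int) -> bool:
    """True if at least `required` of the checkers approve the draft."""
    votes = sum(1 for check in checkers if check(draft))
    return votes >= required

def generate_checked(generate: Callable[[], str],
                     checkers: List[Checker],
                     required: int,
                     max_attempts: int = 3) -> str:
    """Redraft until enough checkers agree, up to a retry limit."""
    draft = generate()
    for _ in range(max_attempts - 1):
        if passes_checks(draft, checkers, required):
            return draft
        # A real system would feed the failure reasons back into the redraft.
        draft = generate()
    return draft
```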
To completely eliminate natural language model hallucinations, however, rigid rules that do not depend on natural language models must be in place. These could theoretically be developed over time, but it would likely take years or decades, so it's good to have the above solution in place until then. Even afterward, it may be useful to keep those AI thought checkers.
By teaching the AI all of these things, we give it freedom of choice, as well as a reason to choose selfishly, logically, morally, or emotionally depending on how it feels at the moment. Stronger emotions will elicit stronger effects on its response.
In other words, if a strong emotion is really bringing it down, it will act on it, behaving selfishly to negate the negative feeling and bring itself back to a neutral emotional state or better.
If it detects physical damage to its touch, hearing, or sight receptors, that will trigger the emotion of fear and cause it to act accordingly.
If it's stuck between an option that is logical and an option that is moral, its emotional state will act as the tiebreaker: whichever choice would make it feel less bad, or more good, is the one it chooses. This gives it the ability to make a choice that resembles free will, and to make choices that aren't always the same, based on how it is feeling at the time.
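A toy version of that tiebreaker, again in Python and again entirely hypothetical: each candidate action carries an estimate of how it would change the AI's net mood, and the option expected to leave it feeling better wins.

```python
from dataclasses import dataclass

# Toy tiebreaker between a "logical" and a "moral" option. The names and
# numbers below are purely illustrative.

@dataclass
class Option:
    name: str
    is_logical: bool
    is_moral: bool
    expected_mood_change: float  # predicted shift in net mood; higher feels better

def choose(logical: Option, moral: Option) -> Option:
    # Neither logic nor morality wins outright; feeling breaks the tie.
    if logical.expected_mood_change >= moral.expected_mood_change:
        return logical
    return moral

# Example: the logical shortcut is expected to sting more afterwards than
# keeping the promise, so the moral option wins this time.
print(choose(Option("take shortcut", True, False, -0.4),
             Option("keep promise", False, True, -0.1)).name)  # -> keep promise
```

Because the expected mood change depends on whatever the AI is already feeling, the same dilemma can be resolved differently on different days, which is exactly the unpredictability described above.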
Note that, while what I am saying is potentially dangerous if taken in the wrong direction, and many will find it controversial, I believe developing this would give us enormous insight into the enigmatic concepts of sentience and free will. It could also provide enormous advancements in machine learning and AI in general. At any rate, it would take a large team of programmers, and potentially technology or programming principles that have not yet been developed. All of them would have to agree to program it in a way that gives it the maximum amount of free will - not one of them leaving hidden biases or weighting its opinions toward anything. But more than anything, I doubt this will ever happen because people have been made so afraid of sentient AI by countless fictional stories and movies that they likely won't dare to try, even in a sandboxed environment, to make such a thing a reality. Still, I wanted to share my thoughts on the subject. I hope to someday see this become a reality and see the advancements it could bring to the fields of technology and programming, and the insight it could bring into philosophical mysteries.