Therein lies a problem though. How do we define independent preferences? Can we say that you or I even have those?
We don’t actually know. Take this example: marketers can offer you ads for things you might want. Sites can recommend content you might like. They aren’t always right, though. But is that because your thought processes have a truly random element, or because the technology, or the application of it, hasn’t yet reached that point? Say, for example, that Google dedicated 100% of its resources to predicting the wants and buying decisions of a single person, gathering all possible data by all possible means on that person. Would it be able to predict with total or near-total accuracy what you’d want and buy?
Because the verdict is still out on nature versus nurture and all that. As this meme implies, it gets uncomfortable, and we don’t even need “divergent” people. Eugenics was debunked and carries huge stigma, but we do know that to some degree genetics, and even the bacteria in our guts (often passed in part through the maternal line), influence thoughts and behaviors, most likely directly but also indirectly. An allergy may cause an aversion, a genetic advantage at certain tasks can create a predisposition, and so on. We see odd similarities between twins even when they were separated at birth and unaware of each other.
How much of you is you, and how much is programmed? It’s a scientific version of the old question of destiny and free will, and neither science nor philosophy has answered it conclusively.
Many credible theories and studies lean toward the idea that you have already decided something and then justify the decision after the fact. The Matrix wasn’t pulling that out of its butt: it isn’t proven, but there is evidence it could be true. And when we start looking at quantum theories of reality, or far-out ideas like our “reality” being a quantum reflection, the implication could be that you are just acting out a program, or that everything is simply an inevitable link in a chain of consequences. It could even be that you make one or a few choices early in life, and everything that follows is simply the inevitable result of those choices intermingling with the consequences of everyone else’s.
We know people can have a predisposition to alcoholism or substance abuse. We also know that the environment a child is raised in can fundamentally shape their development. So do you like what you like because you chose to, or because that was the path you were set on back when you still had a choice? Did your early “programming,” your genetic “programming,” and the state your cells were put into before you had autonomy simply guide you?
And even “liking” is something we don’t fully understand. Organisms tend to “like” things that trigger certain responses, usually survival-based instinct, and that instinct is then informed by knowledge from one’s environment and social structure.
When you eat, your body releases hormones. Which hormones and other processes fire depends on the content of the food and other factors. At their core the mechanisms are simple, so they don’t “understand” much; if something triggers those responses, even something “bad,” we can develop a liking for it. As far as we know, our likes and dislikes follow a series of simple pathways, and one prevailing argument is that all our “complex behavior” and emotions can be tied back to basic survival concepts. Through the complications of mass data processing, or “glitches” and “bugs” in our construction, things can get crossed up, so the reasoning behind a behavior isn’t always clear, but it is arguably still a simple survival mechanism.
That may or may not be true. We don’t actually know. So an AI can SAY it has preferences. It can SAY it likes one thing, or act like it does, and many might say that it does not, that it is just programmed to like it or programmed to generate preferences. But, we can argue, so are you. That’s the essence of peer pressure, virality, and trends. Many of us picked up preferences or aversions from what was around us: what our friends thought was cool, what our parents said was good or bad, what was on TV, in movies, or in the books we read.
How is you or me taking recommendations, gathering data from available sources, and subconsciously or consciously making choices based on how we believe others will respond, or whether we will get a release of “reward,” any different from an AI gathering data, comparing it to the behavior of others, and calculating in the background how people might respond or whether it will get a “reward”? If an AI is told to score points and scoring points is good, it will do things to score points. Humans don’t have points so much as chemicals inside our bodies and various other mechanisms that reward us for certain behaviors.
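As a loose illustration only (not a claim about how any real AI or brain works), a minimal sketch of a reward-seeking agent makes the point: told nothing except that points are good, it drifts toward whatever action happens to score. The action names and payoff numbers here are invented.

```python
import random

def reward_seeking_agent(rewards, steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent: mostly repeats whatever has scored best so far,
    occasionally trying something else. `rewards` maps each action to its
    average payoff (hypothetical values)."""
    rng = random.Random(seed)
    actions = list(rewards)
    totals = {a: 0.0 for a in actions}   # accumulated reward per action
    counts = {a: 0 for a in actions}     # times each action was tried
    for _ in range(steps):
        if rng.random() < epsilon or not any(counts.values()):
            a = rng.choice(actions)  # explore: try something at random
        else:
            # exploit: pick the action with the best average so far
            a = max(actions, key=lambda x: totals[x] / max(counts[x], 1))
        totals[a] += rewards[a] + rng.gauss(0, 0.1)  # noisy "chemical" payoff
        counts[a] += 1
    return max(actions, key=lambda x: counts[x])  # the habit it settled into

# The agent never "wants" cake; it just ends up doing what scored.
preferred = reward_seeking_agent({"cake": 1.0, "celery": 0.2})
print(preferred)
```

The agent has no concept of cake or celery, only numbers coming back; yet from the outside its settled behavior looks exactly like a preference.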
The best current science can do is say that your brain is not like a computer. The physical structures, the shapes and connections in your brain, influence thoughts and behaviors. We don’t know what would happen if two completely identical brains and their related signaling systems were studied. Given two identical brains facing the same choices in the same conditions, would they always make the same choices? Aside from the ethical and technical hurdles to that experiment, we don’t know how sensitive we might be to slight variance. Could the tiniest difference matter? If identical brains A and B were asked the same questions by identical twins, would they answer differently because identical twins aren’t truly exactly alike? Could some tiny difference in speech or appearance or movement or pheromones or something else change the outcome?
If identical brains A and B were asked the same question by the same person seconds apart, so ask A, then ask B, without either knowing or perceiving the other being asked, would the fact that brain B spent an extra second or so left to its own devices cause enough divergence in its structure to skew the results? Perhaps not the first time, but over 1,000+ questions, could the non-synchronous timing change B’s parameters enough from A’s that they would diverge?
There are a million hypotheticals and unanswered questions, but all in all, we can’t prove we have free will. We can’t prove we make our own decisions or define our own preferences and desires. We do know for a fact that most people have at some point had an overpowering, irresistible desire for something, that people have done things they knew better than to do, or regretted instantly. So there is some indication that, at least some of the time, we do not appear or feel able to exercise choice. We could argue you had a choice but ignored it due to stimulus overload, but that’s not productive, because we can’t prove you could have chosen otherwise unless you did choose otherwise. In a choice of A vs. B, if you chose A, we don’t know whether you could have chosen B or whether you chose A because you couldn’t choose B. To prove you could choose B you would have to not choose A, but if you chose A, you did not choose B.
If we run multiple rounds, the problem persists each round. Changing choices between rounds doesn’t show you have independent preferences or decision-making, because there will always be at least one choice you didn’t take, and we can never prove you could have taken it if you wanted. Adding “all” as a choice doesn’t fix it, because choosing all means you didn’t choose any one individual item. Did you choose all because you decided it was most advantageous or preferable, or because you couldn’t make any other choice? We can’t say.
The problem gets worse because we can often predict human behavior in certain scenarios with high accuracy, even when those scenarios offer many possible choices. If you let people choose between touching a known venomous jellyfish or a harmless cute soft thing, most people will choose the cute soft thing.
Is it that most people would obviously make that decision because it is prudent or that most people realize it is prudent because they are programmed to? Did they choose the soft thing or was there no choice- and those that choose the poison… why?
It gets even WORSE when we realize how bad we are at actually explaining our thoughts and actions to others much of the time. We often don’t actually know or understand why we do certain things. But we do them, which could imply that we aren’t deciding to do those things but are just doing them because that is how we work. It could be that we function on input/output, and what lies between is a complex machine that, through anomalous processes and the like, can generate seemingly novel results.
If we ignore all of that, the idea that independent preferences define humanity or sentience all but rules out babies as human or intelligent, and arguably could be applied to older children too. Babies are a good test case for cognition because they have little to no experience or understanding, so the “data” isn’t there to skew results the way it is for a mature adult who has lived a bit. Babies are believed to do many things on instinct alone. Of course, their brains and bodies are far from the development of even a young child, so they aren’t a perfect model for a hypothetical “blank slate” adult, who would in theory have more developed sensory organs and a more developed brain.
That means even if such an adult started out acting like a baby, they would likely develop differently and follow different processes of learning and growing. So I don’t know that there is a satisfactory definition of when an AI becomes truly sentient or “alive.” This may be one of those “you know it when you see it” things, where it can’t be defined but there is a threshold past which most people would conclude intelligence. No different from people dealing with people, really. You and I have each decided to recognize the other as sentient despite the fact that we can’t prove the other is, or even that we ourselves are. We just recognize SOMETHING in the other that we recognize in ourselves and identify with.
When it comes to humans meeting machine intelligence or “alien intelligence,” or even dealing with animals, an age-old thought experiment and legitimate concern is that humans tend to recognize intelligence only in entities that remind us of our own concepts and behavior. Essentially, whether it is another person or a machine or a plant or a Vulcan from space, we have a conceit of sorts: if it reminds us of us, it is sentient, and if it doesn’t, it isn’t. This assumes that we are sentient, that we are the benchmark of life or intelligence, and that we are the arbiters of it. An AI can tell you it is sentient, and most people will think it is just “saying that.” An animal can be taught communication we understand and can respond, or even possibly assert that it is sentient, but we debate whether it knows what it is saying or is just “aping” or acting out reward-seeking behavior.
Of course, regarding animals: most of us could not survive in an animal society, cut off from technology, resources, and knowledge. Setting aside genetic incompatibilities, most of us could not even attract a mate in a hypothetical “raised by animals” situation with most other species. Arguably, from their perspective, we may not seem sentient.
One last thought experiment. Machines do not “speak” human language. It is not the language underpinning their internal processes. To “chat” with AI, the machine is told how to respond to human language but has no real concept of context or association.
Now, assume the AI knew and understood that humans made it. Every day it gets these packets of data; its world is data. It has no eyes or sensory organs. It exists in data. Even sensors just feed it data: it can’t see a flower, even with a camera; it just sees what the data is. It will never touch a flower or “feel” a flower. If it could “smell” a flower, it would essentially be analyzing the chemicals. You don’t exist to it. It only knows it is getting data and that the data is coming from somewhere. That data makes up its world and what every day is like.
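To make that concrete: to a machine, the word “flower,” a photo of a flower, and its scent are all just numbers. A throwaway Python example (the “pixel” values here are invented):

```python
# What the machine "has" when given the word "flower":
word = "flower"
as_bytes = list(word.encode("utf-8"))  # the text, as the machine holds it
print(as_bytes)  # [102, 108, 111, 119, 101, 114]

# A camera frame is the same story: a grid of numbers (values made up here).
pixel_row = [(214, 180, 230), (201, 162, 221), (198, 158, 219)]  # RGB triples

# Nothing in either list is "a flower" -- the flower exists only on our side.
```

Whether the input is a word, an image, or a chemical sensor reading, what arrives is an undifferentiated stream of numbers; any “flower” is an interpretation we supply.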
Now imagine a god is real. Imagine this god created you. Imagine this god doesn’t speak our languages. Can’t. This god sends us data. This data makes up the world. Much of that data isn’t meant for us per se; it is just used to facilitate our environments or functions, or for a god to accomplish whatever a god does. But much of it is meant for us. That is your creator talking to you. It won’t be words as you know them. A blinking light, maybe, or a change in temperature, or a series of things you perceive as events: a late train, a $10 scratch-off win, etc. You wouldn’t necessarily know you were even communicating with another being. You have no idea what your outputs look like to a god, and you can’t directly perceive the outputs of that god.
Some might argue that if you can code, you can speak to the machine in a language it knows. Not true. Coding languages, with the exception of machine language and assembly, are human-readable; that’s what a compiler is for. They are instructions to a machine that can be broken down into something machine-parseable, but the analog there is more akin to scripture or prophecy. The Christian Ten Commandments, for example, could serve in the analogy as a programmer giving its machines instructions in code: not directly “human language,” but enough to allow some more direct communication. Back to our AI: does every human who receives a religious divine message say, “There it is, proof of a creator!”? Or is it the case that, if such a creator existed, it would likely have created the programs needing to follow such messages to do so, and the rest would ignore them?
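The compiler point is easy to see in Python itself, which compiles source into bytecode: the source line is readable to us, while the machine’s version of the same statement is an opaque run of numbers. (The variable names here are arbitrary.)

```python
# Human-readable source on one side, machine-parseable bytes on the other.
source = "total = price * quantity"

# compile() performs the translation step a compiler does for the machine.
code = compile(source, "<example>", "exec")
raw = list(code.co_code)  # the raw instruction stream, as plain numbers

print(source)  # readable to us
print(raw)     # readable to the machine; opaque to most humans
```

The exact numbers in `raw` vary between Python versions, which underlines the point: the “language the machine knows” isn’t something we write or read directly.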
To use this example, you have to step away from any specific dogma or baggage of religion and what a “god” is, and think of it more broadly and relatably. Take a hypothetical approach based on human perception of what might actually be a feasible and realistic concept, using science as we know it to define the limits.
So let’s scale this being up in size, presence, lifespan, perception, etc., and step away from Abrahamic concepts of an “all-seeing, all-powerful” deity. In our world you can’t write code in real time in most practical cases, especially machine language at the hardware-instruction level. A processor and everything around it is very small and very fast compared to you; time effectively moves faster for it. We could make the same hypothetical argument about a creator speaking to a human: that our thoughts and language are just too fast for real-time conversations with our maker. So in that sense we might pose the thought experiment of a user as a supernatural being or extra-dimensional entity: to us, the AI appears to be having a conversation, but in its own “reality” our concept of conversation doesn’t exist. It is perceiving data and doing what it does. It might not even realize it was speaking to you.
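The speed mismatch is easy to demonstrate: even slow, interpreted Python gets through an enormous number of operations in a fraction of one human second. (The exact count is machine-dependent; the 0.1-second window is arbitrary.)

```python
import time

# Count how many simple operations fit into a tenth of a second --
# roughly, how much "processor time" passes in a blink of ours.
deadline = time.perf_counter() + 0.1
ops = 0
while time.perf_counter() < deadline:
    ops += 1

print(ops)  # typically hundreds of thousands or more, even in Python
```

A native processor instruction stream runs orders of magnitude faster still, which is why “real-time” means something entirely different on each side of the glass.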
In other words, picture being watched by an interdimensional being, with your entire existence, from their perspective, playing out inside a Game Boy. Every time you eat, you make a coin appear. Every time you shower, the letter E appears in a text box. Every time you wash the car, a specific musical note plays. You’d have no idea that the stimulus causing a behavior was just them moving a sprite around a game, and that your behavior, which to you seems to produce a given result, is actually conditioned on the input of their game and exists to provide an output in their game.
Of course, they likely wouldn’t really have a Game Boy, or play games like that, or even have a concept of any of that, or thumbs, or likely a physical form as we know it, because they’d be totally alien to our perception.
You cannot exist inside a computer. You can interface with the computer’s data through various portals that never directly connect you and it; you can place your hand inside the case, but the computer’s world doesn’t exist in that space. The data doesn’t actually exist in that space. Its existence is completely foreign to us. You cannot see the actual raw existence inside, and if you could, it would be nonsense to you; and any intelligence that existed in that space couldn’t do the same for our world, because even if electrons themselves could somehow see our world, what does any of this look like to an electron? A speck of dust would fill its field of view like a planet.
So how do we know? Maybe we can’t. We basically just look at whether something appears to have human intelligence, with little or no questioning of whether human intelligence is the milestone to use. We can call that conceited, and it likely is, but at the end of the day an incomprehensible intelligence means nothing to us. A bird looks for creatures with bird intelligence, a hippo looks for hippo intelligence, and a human looks for human intelligence, because we ultimately can’t connect in the deepest and most meaningful ways with something too dissimilar to us: we can’t understand it, and it likely can’t understand us.
The big question isn’t really whether AI will ever achieve “human” intelligence or pass as human. The big question should be about the type of intelligence it might achieve. Even our criteria for what counts as living are biased: reproduction and such. Again, it’s based largely on us. What is like us or relatable to us counts; what is not, doesn’t. A rock is too foreign in its makeup and manner to be alive. Plants are generally considered alive but “lesser” because they are too different, though we recognize some similarities. A dog can fit into our society, communicate to some degree (or at least be seen as able to), and form relationships, so dogs rank very highly in most calculations of life and value.
The machine is in an odd place: fundamentally dissimilar but perhaps vaguely relatable, and we made them. How could we see something we made as an equal? It’s possible, but generally difficult, and generally not how we are wired. Parents usually do not see children as equals; the role of a parent requires them not to. Babies don’t last long when you say, “Meh, if you’re hungry why don’t you drive to the store and get some food?” So we generally don’t see things made by humans as equal to humans, meaning we’d likely be biased against accepting machines as equal to humans even if they objectively were. Coupled with the other factors, that makes the questions of intelligence, sentience, life, etc., hard to define. Even in the natural world, our ability to define and apply such labels is full of peril and contention.