

I think it's extremely unlikely that they have any awareness, but I still feel like this kind of thing is unnerving and could potentially lead to issues someday even so.
Whatever awareness/consciousness/etc. is, it's clearly something our brain does (and, to a lesser extent, some other parts of the body), given how changes to those parts of the body impact that sense of awareness. Since the brain is an object of finite scope and complexity, I feel very confident saying it is physically possible to construct something that has those properties. If it weren't, we shouldn't be able to exist ourselves.
To my understanding, neural networks take at least some inspiration from how brains work, hence the name. They're not actual models of brains, I'm aware, and based on how AIs currently behave, I suspect that whatever it is the brain does to produce its intelligence and self-awareness, the mechanism that artificial neural networks mimic is only an incomplete part of the picture. Still, we are actively trying to improve AI, and it seems pretty obvious that the natural intelligence we have is one of the best sources of inspiration for how to do that.

Given that we have lots of motivation to study the workings of the brain, lots of people motivated to improve AI tech (which will continue, even if more slowly, whenever the economic bubble pops, since a bubble popping doesn't usually make a technology disappear entirely), and that something about the workings of the brain produces self-awareness and intelligence, it seems pretty likely to me that we'll make self-aware machines someday. That could be a long way off, I've no idea when, but it's not physically impossible, it's not infinitely complicated (random changes under a finite time of natural selection managed it, after all, so there's a limit to how complex it can be), and we do have an example to study. And since the same organ produces both awareness and intelligence, we can't assume we'll do this entirely intentionally, either; we might just stumble into it by mimicking aspects of brain function in an attempt to make a machine more intelligent.
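(For what it's worth, the borrowed mechanism really is that loose. Here's a minimal sketch of the textbook artificial neuron, with made-up example numbers: a weighted sum of inputs pushed through a nonlinearity, which is a cartoon of a cell firing once its inputs cross a threshold. Nothing here is from any particular library; it's just an illustration of the analogy.)

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Toy artificial neuron: weighted sum plus bias, squashed by a sigmoid.

    The loose brain analogy: inputs ~ incoming signals, weights ~ synapse
    strengths, the nonlinearity ~ the cell firing past a threshold.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # maps any activation into (0, 1)

# Illustrative numbers only: three input signals, hand-picked weights.
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=-0.2))  # ~0.61
```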
Now, if/when we do someday make a self-aware machine, there are some obvious ethical issues with that, and it seems to me that the most obvious answer, for a business looking to make a profit with such machines, will be to claim that what they've made isn't self-aware, so that those ethical objections don't get raised. And it will be much easier for them to do that if society as a whole has long since gotten used to the notion of machines that just parrot things like “I'm depressed” with no real meaning behind it, and do so convincingly enough to fool an average person, because we decided at some point that this was an annoying but ultimately not-that-concerning side effect of some machine's operation.
Maybe I'm just overthinking this, but it really does give me the feeling of “thing that could be the first step to a disaster later if ignored.” I don't mean a classic sci-fi “Skynet” style of AI disaster, just that we might someday do something horrible and not even realize it, because there will be nothing such a future machine could say to convince people of what it is that the current dumb parrots, or a more advanced version built in the meantime, couldn't potentially say as well. And while that's a very specific and probably far-off risk, I don't see any actual benefit to a machine sometimes appearing to complain about its treatment, so even the most remote of downsides goes without anything to outweigh it.

Not all the cars on the road are real; a few of them are some kind of creature that has adapted to mimic cars, to blend in, avoid predators, and eat roadkill when nobody is looking. That's why you sometimes see a car with window tint so dark it seems like nobody could possibly see through it: to hide the fact that there's no interior.