• 12 Posts
  • 676 Comments
Joined 3 years ago
Cake day: June 5th, 2023

  • I just think that dying is unethical in general and represents a maximal state of suffering (well, more a minimum of non-suffering, since you have no capacity to experience anything when you don't exist anymore, not maximal suffering in the “hell” sense. I know many or most people would disagree with me on that point, but it's not something I feel like spelling out my reasons for at the moment). I also don't believe in the concept of deserved suffering; that is to say, in my view suffering as punishment only has value in its capacity to rewire a person's future behavior, so that they live without continuing whatever harms led to the punishment. Anything beyond that is wrong, no matter what they've done, even if they were literally the most heinous person of all time. Besides, if you're actually in a position to execute them, then you're in a position to take their money and power too. Pointing out that they rarely face justice isn't actually relevant here: if your legal system is too corrupted to hand out a jail sentence and make it stick, it's also going to be too corrupted to hand out a death sentence and carry it out. These people aren't wealthy because they're inherently good at making money; they're wealthy because wealth begets wealth and they either started with some, lucked out somewhere, or have relations who have it. So if you take both their wealth and the wealth of their friends and relatives, how are they going to get it back?


  • The AI training rights thing makes me wonder: presumably, having to grant them permission in the ToS implies that, without that permission, there is at least some situation in which they would not be legally allowed to train an AI on the data? If that is the case, how should a case be legally handled where a user who does not have permission from whoever legally owns said data posts it, and it gets used in AI training? Presumably, if this had come up before AI training, all that would be needed once the legal owner found out would be some takedown process, and the relevant posts would be removed. But as far as I'm aware, you can't just remove something from an AI's training data and have it “unlearn” that thing as if it had never been included.



  • Interesting, I wasn't aware that Wales was historically disunited like that. But I suppose that other than its location, its having a different language, and its having one of the more interesting flags, I don't know a ton about it. I just assumed it was a single kingdom before being invaded by the English at some point.



  • Depends on how literally you mean it. In general, those most likely to say it won't think that humans are literally designed not to die and only do so because someone made a mistake, but rather that humans might be redesigned or modified not to (or at least not to die from biological aging). It's not a hard sentiment to find if you hang out in spaces with transhumanists. But I find the ones who overlap with AI bros, the ones with an attitude like “this will totally happen in my lifetime and with no effort, because the AI singularity is going to come and give us everything in a few years,” impossible to talk to, because all too often they will cite even the tiniest listed improvement in any AI system as proof that literally everything, possible or impossible, is about to happen, and then insist you aren't paying attention when you respond with skepticism.



  • Emotions aren't entirely rational, with a clearly thought-out process to justify why one should feel them. In any case, it's common enough for people to assign the general actions of people within a group to the group as a whole (which isn't really fair or a reflection of reality, but it can be pragmatic at times and requires less thought and information than judging on an individual basis, so it makes sense that people's brains are wired up to do it even if it's not always desirable). This can get extended to the groups one is a part of oneself, including those whose membership one did not choose. And the US at the moment has even worse than typical leadership, has a great deal of power for that leadership to abuse, and still has media free enough that people within it stand a good chance of knowing about at least some of it. On top of that, if you're here on Lemmy, you're probably running into people with a somewhat higher than normal awareness of the historical abuses previous Americans have perpetrated, just because Lemmy leans left and anti-establishment and those things get talked about a lot in such spaces.


  • What help can a modern AI really give you in making a nuke, though? It could give you broad-strokes information about how they work in general, but that information isn't really a secret anyway; nukes are a technology over three quarters of a century old, and you can just look up how they work. For anyone with any realistic chance of building one, obtaining that information isn't the problem.

    You could perhaps ask the thing for more specific information about how to design all the relevant components, but then you have to deal with the issue that AIs tend to be wrong a lot of the time. And in any case, if you have the resources to seriously have a chance at building such a thing, is hiring, recruiting, or training some actual nuclear physicists or engineers really going to be your limiting factor, such that getting a bot to do their work would help you?

    I'd imagine the hard part is actually getting or refining nuclear material to the needed enrichment level, testing the thing, and doing all of this without being found out. ChatGPT or whatever can't exactly go out and buy uranium or build a secret enrichment facility for you, no matter how much you jailbreak its safeguards on the matter.


  • It's a statistical effect, not infantilization. Suppose they are just lazy. What then? Do you expect that if enough people realize this and call them out on it, people who didn't vote will suddenly realize the error of their ways and go do it next time? If instead you treat them as if whatever existing difficulty in voting were the cause, and work to make it easier rather than casting blame, what harm is done? And if I “stop infantilizing lazy voters,” as you put it, what benefit is achieved?

    It seems to me that if what's necessary to achieve a better outcome is for people who tend to stay home to vote instead, then it makes sense to do whatever will make them more likely to do it, whether or not they seem to deserve it. And people rarely do what you wish after you assign blame to them for something, regardless of how true that blame is. Assigning blame, if you can back it up with appropriate consequences, can help change the behavior of specific individuals, but it is virtually never effective at changing large and vague groups whose members you don't even know. To do that, you have to create systems that push people toward the desirable behavior, rather than leaving it up to their personal responsibility, which has already been shown to be ineffective by the fact that the end you want isn't happening.





  • You misunderstand; I am not saying “make sure he spends it responsibly”. Nobody has “made” him do this at all, and I didn't advocate for a policy of doing so. What I'm saying is that I don't think this particular use is worthy of condemnation the way his other actions are, because in the long run I think this specific thing will end up benefiting people other than him, whether he intends that or not (even if the American healthcare system restricts access, which I'm not confident it will do completely, not every country has that system, it's statistically improbable that the US will keep it forever, and research results are both durable and cross borders). That sentiment isn't saying his wealth is excused, just that I think people are seeing only the negatives here merely because of the association with Altman's name, and ignoring the potential benefits out of cynicism. The concept would be just as valid with him funding it as it would be had he been condemning it instead.


  • The response to something beneficial being available only to the rich shouldn't be to avoid developing that thing; it should be to make it available to everyone. The failures of the US healthcare and economic systems don't suddenly make developing new medical techniques a bad thing. Human augmentation is a separate issue from curing genetic disease, though I'd personally argue that it wouldn't be a bad cause either, with the same caveat about availability. It at least has more potential to improve somebody's life somewhere down the line than buying a yacht or some other useless rich-person toy with his ill-gotten gains would.