Not sure I agree that there will be less human labor “need.” Ideally, we should strive for progress, and not just survive. I think there is infinite use for human labor.
I agree with your second point.
IDK. Rocket Mortgage seems to be experts on being responsible with money, as evidenced by this company meeting: https://www.reddit.com/r/wallstreetbets/comments/1alzgv3/work_meeting_at_rocket_mortgage_time_for_puts_yet/
Haven’t tried Gemini; may work. But, in my experience with other LLMs, even if text doesn’t exceed the token limit, LLMs start making more mistakes and sometimes behave strangely more often as the size of context grows.
No problem. Glad you appreciated it. Namaste.
They indicated that they were wondering how pro-AI people would feel:
I really sat there wondering how do pro AI people feel when they get an email/SMS like that.
This is more complicated than some corporate infrastructures I’ve worked on, lol.
I usually just use VS Code to do full-text searches, and write down notes in a note taking app. That, and browse the documentation.
Nah, LLMs have severe context window limitations. It starts to get wackier after ~1000 LOC.
I’m pro-AI, but not pro-AI in the sense of, “turn these bullet-points into a verbose email,” and not pro-AI for personal communication like this. I hope this kind of stuff doesn’t become common. It’s like going in the opposite direction of “SMS language” (which I view favorably).
Python is quite slow, so it will use more CPU cycles than many other languages. If you're doing data-heavy stuff, it'll probably also use more RAM than, say, C, where you can control types and the memory layout of structs.
That being said, for services, I typically use FastAPI, because it's just so quick to develop stuff in Python. I don't do heavy stuff in Python; that's done by packages that wrap binaries compiled from C, C++, Fortran, or CUDA. If I need tight loops, I either switch entirely to a different language (Rust, lately), or I write a library and interact with it via ctypes.
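For context, a minimal sketch of the ctypes approach. I'm using libc's `abs()` as a stand-in here so the snippet is self-contained; on a real project you'd point `CDLL` at your own compiled `.so`/`.dylib`/`.dll` and declare its signatures the same way:

```python
import ctypes
import ctypes.util

# Stand-in for a custom compiled library: load the C standard library.
# Fallback to CDLL(None) (the current process, POSIX) if lookup fails.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature so ctypes marshals arguments and the return
# value correctly; you'd do this for each function your library exports.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```

Declaring `argtypes`/`restype` is the part people skip and then regret: without it, ctypes guesses, and the guesses break on 64-bit pointers.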
With the Hispanic people I know who prefer Trump, it's the usual Trumpist/Republican reasoning. Even down to anti-immigration, from a person whose father was an undocumented immigrant. Propaganda and the desire to be in the in-group among your peers is wild.
I don’t think anyone is advocating for a “slap on the wrist.” The U.S. criminal justice system is the most draconian in the West, and doesn’t do “slaps on the wrist,” unless you’re in particular economic or social classes.
IMO, ideally, he would be sentenced for as long as it takes to rehabilitate him. Could be 5 years, 10 years, 30 years, or never, IDK, I’m not a psychologist. But, the U.S. prison system isn’t really designed for rehabilitation either.
Production AI is highly tuned by training data selection and human feedback. Every model has its own style that many people helped tune. In the open model world there are thousands of different models targeting various styles. Waifu Diffusion and GPT-4chan, for example.
I think you have your janitor example backwards. Spending my time revolutionizing energy production sounds much more enjoyable than sweeping floors. Same with designing an effective floor-sweeping robot.
AI are people, my friend. /s
But, really, I think people should be able to run algorithms on whatever data they want. It’s whether the output is sufficiently different or “transformative” that matters (along with other laws, like those covering use of people’s likeness). Otherwise, I think the laws will get complex and nonsensical once you start adding special cases for “AI.” And I’d bet that if new laws are written, they’d be written by lobbyists to further erode the threat of competition (from free software, for instance).
There are plenty of open-source projects that distribute executables (i.e. all that use compiled languages). The projects just provide checksums, ensure their builds are reproducible, or provide some other method of verification.
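The checksum side of that is simple: hash the downloaded file and compare against the digest published with the release. A sketch (the filename in the comment is hypothetical):

```python
import hashlib

def sha256sum(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large release artifacts don't load into RAM at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published alongside the release, e.g.:
# assert sha256sum("tool-v1.2.3-linux-x86_64.tar.gz") == published_digest
```

Reproducible builds go a step further: anyone can rebuild from source and confirm they get a byte-identical binary, so the checksum itself is independently verifiable.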
In practice, you’re going to wind up in dependency hell before pypi stops hosting the package. E.g. you need to use package A and package B, but package A depends on v1 of package C, and package B depends on v2 of package C.
And you don’t need to use pypi or pip at all. You could just download the code directly from the repo and import it into your project (possibly needing to build it if it has binary components). However, if it was on pypi before, then the source repo likely has everything pip needs to install it (i.e. it contains setup.py and any related files).
The search engine LLMs suck. I’m guessing they use very small models to save compute. ChatGPT 4o and Claude 3.5 are much better.
C# is actually pretty nice. Ecosystem, not so much, but D doesn’t really have one anyways :)
Yeah, the image bytes are random because they’re already compressed (unless they’re bitmaps, which is not likely).
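A quick way to see this: measure the byte entropy of some repetitive, bitmap-ish data before and after compression. Toy data and zlib here, just to illustrate the point; real image codecs behave the same way:

```python
import math
import zlib
from collections import Counter

def byte_entropy(data):
    """Shannon entropy in bits per byte (8.0 = looks uniformly random)."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

raw = b"\x00\x10\x20\x30" * 4096       # repetitive, uncompressed-bitmap-ish
packed = zlib.compress(raw, level=9)   # the "already compressed" bytes

print(byte_entropy(raw))     # exactly 2.0 (four equally likely byte values)
print(byte_entropy(packed))  # noticeably higher: compression squeezed out the structure
```

Compression works by removing statistical redundancy, so whatever bytes remain are, by construction, close to incompressible noise. That's also why compressing a JPEG or PNG a second time gains you almost nothing.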
Don’t know why society tolerates these dumbass parasites.