I use it for soooo many things:
help with learning a new programming framework/language
questions about basic home repair and issue prevention
when I feel anxious or vaguely depressed but can’t put my finger on exactly why or what’s causing it, it’s a great sounding board
nutrition questions about specific foods or meals
things to add to a simple meal to make it more interesting / complete
weighing pros and cons for an important decision, getting ideas I may have not thought of
conducting a mock interview for me when I give it a job description and my resume
random *nix commandline recipes where I don’t want to spend 10 minutes googling and inevitably landing on SEO garbage blogs and just need a quick snippet
I am not yet tired of pointing out that you cannot rely on an LLM for facts. LLMs like ChatGPT have no concept of what a fact is. Unless you already know something about a topic, you won't even notice when they are making stuff up.
It’s good to be cautious, and I agree. Since I have deep expertise in programming, I recognize when it’s over-complicating things or outright hallucinating, and I double-check output when it’s important.
But that doesn’t discount the incredible usefulness of these tools. I’ve noticed a 20-30% productivity boost in my work, and googling for things now feels like a step back to the dark ages (it’s telling that Stack Overflow laid off 28 percent of its staff).
Even as just a sounding board, mentor, coach, and idea generator, the tools are enormously helpful, and that use doesn’t require 100% accuracy. Besides, it’s not like pre-LLM documentation on the web or chats with colleagues were ever flawless. We’ve all run into a post online that was confidently wrong, or a coworker who stubbornly insisted on something stupid.
Here’s another thing to consider: while the tools have flaws and limits, this is the worst they’ll ever be going forward. There are constant improvements, like tree-of-thought prompting and multi-modality. Just the other day I took a photo of the wire mess near my home router, fed it to GPT Vision, and had it suggest cable-management products and methods (I’ve always struggled with cable management).
In fact, the biggest limitation I’ve noticed is people’s lack of creativity in using the tools (or their unwillingness to use them at all), not the tools themselves.
They’re here to stay. ChatGPT became the fastest-adopted consumer product in history for a reason. Those willing to work with these tools and learn both their strengths and weaknesses will benefit; those too dismissive to do so will fall behind.
Oh, yes, they absolutely have their uses. But at least at the moment they don’t live up to the hype, and until they get their facts straight they are mostly useless for my job. I keep warning people because I see a worrying trend of people assuming ChatGPT actually knows what it’s talking about, and that is not the case.
Some do cite their sources
Yeah, and the sources are sometimes made up