It’s good to be cautious. I agree. And since I have deep expertise in programming, I can recognize when the model is over-complicating things or outright hallucinating. I also double-check its output when it matters.
But that doesn’t discount the incredible usefulness of these tools. I’ve noticed a 20–30% productivity boost in my own work, and googling for answers now feels like a step back to the dark ages. I’m clearly not alone in that: Stack Overflow laid off 28 percent of its staff.
Even as just a sounding board, mentor, coach, and idea generator, the tools are enormously helpful, and that role doesn’t require 100% accuracy. If you think about it, pre-LLM documentation on the web and chats with colleagues were never flawless either. We’ve all run into a post online that was confidently wrong, or a coworker who stubbornly insisted on something stupid.
Here’s another thing to consider: while the tools have flaws and limits, this is the worst they’ll ever be going forward. There are constant improvements, like tree-of-thought prompting and multimodality. Just the other day, I took a photo of the wire mess near my home router with GPT Vision and had the LLM suggest cable-neatening products and methods (I’ve always struggled with cable management).
In fact, the biggest limit I’ve noticed is simply people’s lack of creativity in using the tools (or willingness to use them), not the tools themselves.
They’re here to stay. ChatGPT became the most quickly adopted product of all time for a reason. Those who are willing to work with these tools and learn both their strengths and weaknesses will benefit; those who are unwilling, and too dismissive, will fall behind.
Oh, yes, they absolutely have their uses. But at least for the moment they don’t live up to the hype, and until they get their facts straight they are mostly useless for my job. I keep warning people because I see a worrying trend of people assuming ChatGPT actually knows what it’s talking about. That is not the case.