From my experience with ChatGPT:

It will NEVER consistently give you only the value in the response. It always eventually adds some introductory text, like it’s talking to a human. No matter how many times I tried to get it to give me back the answer alone, it never consistently did.

ChatGPT is terrible with numbers. It can’t count, it can’t do math, none of that. So asking it to do byte math is asking for a world of hurt.
If this isn’t joke code, that is scary.
I refuse to believe you are not certain this is a joke
I know it is, but I’ve also seen people try to use ChatGPT for similar things as a serious endeavor.
Friendly reminder that CalcGPT exists
Neat! Never seen this one before.
Where’s my 1 million dollars?
How is it different from a calculator or, say, a Python REPL? I’m asking b/c I’m too old to try out young folks’ inefficiently engineered “solutions”.
You input some text, and ChatGPT guesses the answer using the linear algebra that powers LLMs.
The project was made as a satire of companies putting AI into everything.
Have you ever wanted your calculator to be able to be wrong like a human?
Like, not just calculating the wrong answer or returning an error; I mean an outright brainfart, just giving a nonsense answer.
I know a guy who was working on something like this: they just had the call to the model loop until the response met whatever criteria the code needed (e.g. a single number, a specifically formatted table, viable code, etc.), or exit after a set number of failed attempts. That seemed to work pretty well; it might mess up from time to time, but with the right prompt it’s unlikely to do so repeatedly when asked again.
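(A minimal sketch of that retry loop in Python, assuming the official openai SDK; the model name, validation rule, and attempt limit are all placeholders:)

```python
import re

from openai import OpenAI  # assumes the official openai-python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_for_number(prompt: str, max_attempts: int = 3) -> int:
    """Re-ask the model until the reply is a single integer, or give up."""
    for _ in range(max_attempts):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # Accept the reply only if it is nothing but one integer.
        if reply and re.fullmatch(r"\s*-?\d+\s*", reply):
            return int(reply)
    raise ValueError(f"no usable answer after {max_attempts} attempts")
```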
I’m currently a guy working on something like this! It’s even simpler now: the ChatGPT API supports structured output. Basically you give it a JSON schema and it’s guaranteed to respond with JSON that validates against that schema. I spent a couple of weeks hacking at it and I’m positively impressed: I’ve had clean JSON 100% of the time, and the data extraction is pretty reliable too.
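(For reference, a minimal sketch of what that looks like against the OpenAI Chat Completions API; the schema and model name here are purely illustrative:)

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical schema for illustration: one price change per call.
schema = {
    "type": "object",
    "properties": {
        "item_id": {"type": "integer"},
        "new_price": {"type": "number"},
    },
    "required": ["item_id", "new_price"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # early model with structured output support
    messages=[{"role": "user", "content": "price of item 13 is down to 4 dollars 99"}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "price_change", "schema": schema, "strict": True},
    },
)
print(response.choices[0].message.content)  # JSON that validates against the schema
```

Per OpenAI’s docs, `"strict": True` is what makes the output guaranteed to validate, rather than merely likely to.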
The tooling is actually reaching a sweet spot right now where it makes sense to integrate LLMs in production code (if the use case makes sense and you haven’t just shoe-horned it in for the hype).
Fair play to OpenAI - I still think LLMs are overhyped, but they’re moving things along constantly in impressive ways.
Honestly, the use case I’m working on is pretty mind-blowing. The user records an unstructured voice note like “i am out of item 12, also prices of items 13 & 15 is down to 4 dollars 99, also shipping for all items above 1kg is now 3 dollars 99”, and the LLM will search the database for items >1kg (using tool calling), then generate a JSON document representing the changes to be made. We use that JSON to build a simple UI where the user can review the changes; then voilà, it’s sent to the backend, which persists the changes in the database. In the ideal case the user never even pulls up the virtual keyboard on their phone; it’s just “talk, check, click, done”.
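(For illustration, the kind of change list that voice note might map to; every field name and change type below is invented, not the commenter’s actual schema:)

```python
# Invented shape, purely to illustrate a reviewable change list; a real
# schema would restrict "type" to an enum of the allowed change kinds.
proposed_changes = {
    "changes": [
        {"type": "stock_out", "item_id": 12},
        {"type": "price_update", "item_id": 13, "new_price": 4.99},
        {"type": "price_update", "item_id": 15, "new_price": 4.99},
        {"type": "shipping_update", "filter": {"min_weight_kg": 1.0}, "new_shipping": 3.99},
    ]
}
```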
Human-in-the-loop systems deal really nicely with a lot of LLMs’ problems. Very cool! Do you have specific change “types” that the system is able to propose? I guess restricting the response to the right types is covered by your JSON schema?
This works well too, and with many different models: https://github.com/guardrails-ai/guardrails
That’s fucking badass, thanks for the pointer, this might prove useful. In the structured-output department I’m hearing great things about dotTxt’s Outlines, which lets you constrain output according to a regex, but I haven’t tested it yet.
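(A minimal sketch of the Outlines idea, based on the project’s 0.x README; the model choice is arbitrary and the API may have changed since, so check the current docs before copying:)

```python
import outlines

# Load a local model and constrain generation to match a regex, so the
# output can only ever be a short run of digits.
model = outlines.models.transformers("mistralai/Mistral-7B-Instruct-v0.2")
generator = outlines.generate.regex(model, r"\d{1,6}")
print(generator("How many bytes in 4 kilobytes? Answer with digits only: "))
```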
That’s a good approach. I think for my use case the struggle was trying not to use a ton of tokens (upper management was being stingy on that front). I kept trying to push to make it more robust, but you know how those things go: axed ahead of their time, or zombified.
Response:
Observation 1: ChatGPT is designed to provide context for responses to enhance clarity for human users. Requests for answers without accompanying text may result in inconsistent behavior due to its conversational model. It is not optimized for providing pure data outputs without context.
Observation 2: ChatGPT is not inherently equipped to perform complex mathematical operations with high reliability. Numerical inaccuracies or rounding errors may occur due to the model’s structure. While capable of basic arithmetic, it is not a specialized tool for precise calculations, particularly in domains like byte math, where accuracy is critical.
Statement acknowledged.
For 1, that’s why you say “Format your answer in this exact sentence: The number of bytes required (rounded up) is exactly # bytes., where # is the number of bytes.” And then regex for that sentence. What could go wrong?

Also, it can do math somewhat consistently if you let it show its work, but I still wouldn’t rely on it as a cog in code execution. It’s not nearly reliable enough for that.
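(If anyone does try this, the extraction side would look something like the following; the reply string is hypothetical:)

```python
import re

# Hypothetical reply; the sentence template is the one requested above.
reply = "The number of bytes required (rounded up) is exactly 42 bytes."

match = re.search(
    r"The number of bytes required \(rounded up\) is exactly (\d+) bytes\.", reply
)
if match:
    n_bytes = int(match.group(1))  # 42
else:
    # ...this, presumably, is what could go wrong.
    raise ValueError("model ignored the requested format")
```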