
The inaccuracy of Chat GPT

jar546

Forum Coordinator
Joined
Oct 16, 2009
Messages
11,072
Location
Somewhere Too Hot & Humid
So last night, I spent a solid half hour going back and forth with ChatGPT because it could not execute a simple formula. It displayed the correct formula but could not carry out the calculation. I simply asked it to give me the volume of a cylinder and provided the dimensions. I then asked it to convert the resulting volume to gallons and could not get consistent answers. Even when I pointed out the problem, ChatGPT acknowledged the mistake several times and changed the way it answered, but still answered incorrectly. This happened about 10 times before I gave up, realizing it could not learn from its mistakes even when they were pointed out and acknowledged.

I asked for the volume of a cylinder that had a 72" diameter and was 72" high. It knew the formula but decided to convert the inches to feet, which is fine. The problem was that it continued to miscalculate from a correct formula.

Before I continue this thread, let's all agree that the volume of this cylinder is 293,148.29 cubic inches, using V = πr²h.

Do you agree with that number?

I am just confirming that ChatGPT may have its uses, but the level of inaccurate information I've found so far is staggering, including on building code questions.
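For anyone who wants to check the arithmetic, here is a minimal Python sketch (using the exact conversion of 231 cubic inches per US gallon):

    import math

    # Cylinder: 72" diameter, 72" high
    radius = 72 / 2                             # inches
    height = 72                                 # inches

    volume_in3 = math.pi * radius**2 * height   # V = pi * r^2 * h
    print(round(volume_in3, 2))                 # 293148.29 cubic inches

    # One US gallon is exactly 231 cubic inches
    print(round(volume_in3 / 231, 2))           # 1269.04 gallons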
 
Do you agree with that number?
I can't say that I agree with that number but I can say that google does.

[Screenshot: Google's calculator showing the same result]

Now the question becomes how many jelly beans fit in a cubic inch....more or less.

Artificial intelligence relies on available information. Sifting through everything that's out there and tossing the wrong information takes genuine intelligence.

Chat bots are similar to autonomous automobiles in that it will take a new generation to fully embrace the technology.

Given the infancy of Chat GPT and the advent of quantum computing coming to a laptop computer near you....anything can happen....and will.

293,148.29 divided by .06544986910006544 (cubic inch volume of a jelly bean) = 4,478,974.4277731 beans. Must I wait for a quantum computer to arrive before I will be able to tell you how many are red jelly beans?
Can Chat GPT do that? Did I win a prize?
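Taking the posted jelly bean volume at face value (the 0.0654... cubic inch figure is the poster's assumption, not a standard constant), the division does check out:

    volume_in3 = 293148.29           # cylinder volume from above, in cubic inches
    bean_in3 = 0.06544986910006544   # assumed volume of one jelly bean, in cubic inches

    print(volume_in3 / bean_in3)     # ~4478974.43 beans, as posted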
 
I can't say that I agree with that number but I can say that google does.

View attachment 10379

Now the question becomes how many jelly beans fit in a cubic inch....more or less.
That's right....it is almost Easter.....That time when we celebrate the death and resurrection of Jesus Christ by telling our children that a giant bunny left candy in the night....Amen...
 
Number correct.

I am a card-carrying Luddite, so I can't say I really understand the chatgpt/AI thing going on right now. It seems like this program is not thinking; rather, it is a really intuitive search engine. I harken back to the idea that true AI is more than a program and algorithmic function that searches the internet or scans books for the answer to a question, which, in my opinion, is not thinking, it is regurgitating. If that is what it is, I think the craze will pass, as it can only be as accurate as the info it feeds on. If it is more than that, I am afraid we have bigger things to worry about.

I wonder if it has been programmed with Asimov's 3 laws.....
 
Sifu, unfortunately I think it's here to stay and will take over a lot of tasks. Considering it has passed graduate-level tests (Wharton and U Minn law, as well as bar exams) in its very early release, I don't think it's a passing fad. It beats a lot of mutual fund advisors. Call for tech help, or to ask about an account or a travel reservation - pretty sure it can beat the average there already.

Best thing would be to invest some of your savings in it if you've got 8-10 years.
 
It learns fast! I asked:
"What is volume of a 6 foot tall cylinder 6' in diameter?"

It (he, she, they?) replied:

"To calculate the volume of a cylinder, we use the formula:
V = πr^2h
where V is the volume, r is the radius, h is the height, and π is a mathematical constant approximately equal to 3.14159.
Since the diameter of the cylinder is 6 feet, the radius is half of that, or 3 feet.
Therefore, the volume of the cylinder can be calculated as:
V = πr^2h = π(3 ft)^2(6 ft)
V = 54π cubic feet
So the volume of the cylinder is approximately 169.65 cubic feet (rounded to two decimal places)."

Pretty close. I think I'll feed it all the I-Codes (but NOT the commentaries) and see how that goes.
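The two answers are actually the same cylinder in different units, which is easy to confirm (a quick sketch; 1 cubic foot = 1,728 cubic inches):

    import math

    volume_ft3 = math.pi * 3**2 * 6      # 54 * pi
    print(round(volume_ft3, 2))          # 169.65 cubic feet

    print(round(volume_ft3 * 1728, 2))   # 293148.29 cubic inches -- matches the opening post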
 
It learns fast!
You are assuming that it learns. Try feeding it the same question multiple ways (tens or hundreds of ways) over some period of time. If you get the wrong answer at first, then you try to correct it, and then it consistently gives you only the correct information, maybe you can infer that it learned. But if it still gives you the wrong information, say, 10% of the time, there is no learning going on.

Cheers, Wayne
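If anyone wanted to run that test systematically, a sketch along these lines would do it (ask_model is a hypothetical stand-in for whatever chat interface is being tested, not a real API):

    import math

    EXPECTED = math.pi * 36**2 * 72   # 293148.29... cubic inches
    QUESTION = 'What is the volume in cubic inches of a cylinder 72" high with a 72" diameter?'

    def ask_model(question):
        # Hypothetical: wire this up to the chatbot under test
        # and parse its reply into a number.
        raise NotImplementedError

    def error_rate(trials=100, tolerance=0.5):
        # Ask the same question many times; report the fraction of wrong answers.
        wrong = 0
        for _ in range(trials):
            if abs(ask_model(QUESTION) - EXPECTED) > tolerance:
                wrong += 1
        return wrong / trials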
 
You are assuming that it learns. Try feeding it the same question multiple ways (tens or hundreds of ways) over some period of time. If you get the wrong answer at first, then you try to correct it, and then it consistently gives you only the correct information, maybe you can infer that it learned. But if it still gives you the wrong information, say, 10% of the time, there is no learning going on.

Cheers, Wayne
I get it. If man were meant to fly he'd have wings. And those horseless carriages are just a fad. And how's your Royal manual typewriter doing?
 
My son is working on his PhD in mechanical engineering. He spends a lot of time studying scientific papers that are very math intensive. Some of the authors use unusual techniques and skip important steps that make them hard to follow. He uses Chat GPT to help explain the methodology. Often, it will locate related papers that are easier to follow. This has saved him and others a lot of research time.
 
In trying to determine what Chat GPT is good for, I think the field of inquiry will play a significant role. A subject with a tremendous amount of internet information might have more misinformation than an esoteric subject. However, as a percentage, a topic with limited information might yield a higher rate of inaccurate results.

It appears that the AI creators just tossed this stuff out in the open to see what happens.
 
By posting calculations and answers here in this forum, we are likely altering the information that chatGPT is using. I'd dare say that if we posted miscalculated answers here, chatGPT could pick them up and provide inaccurate information.

Really, and it was said above, chatGPT and similar tools are not AI. They do not think on their own. ChatGPT is a glorified search engine with optimizations that follow our (humankind's) word structure. Google, for example, is a keyword-based search engine: it focuses on the individual search words to identify content. What chatGPT has been able to do is run a search and then provide a single result that extracts information from one or more reference pages, with the results presented in common sentence structure.
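As a toy illustration of the keyword-matching idea (a deliberately naive sketch, nothing like either system's real internals):

    def keyword_score(query, document):
        # Naive keyword search: count how many document words appear in the query
        words = set(query.lower().split())
        return sum(1 for w in document.lower().split() if w in words)

    docs = [
        "The volume of a cylinder is pi times radius squared times height.",
        "Jelly beans are a popular Easter candy.",
    ]
    query = "volume of a cylinder"
    best = max(docs, key=lambda d: keyword_score(query, d))
    print(best)   # a keyword engine returns the page; a chatbot rewrites it into a reply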
 
To Chat GPT:
“Citing appropriate references in the National Building Code of Canada, 2015 edition, write a building inspector's report for a framing inspection that requires correction for an improperly spliced three-ply beam, and a steel column that requires a bearing plate to span the width of the beam.”

Building Inspector's Report

Location: [Insert Address] Date of Inspection: [Insert Date]

Observations: During the framing inspection, the following deficiencies were observed:

  1. Improperly Spliced Three-Ply Beam: According to section 9.23.3.3 of the National Building Code of Canada (NBCC) 2015 edition, "Where members are spliced, the splices shall be made in accordance with the manufacturer's recommendations or by one of the methods described in Sentences (4) to (7).” The three-ply beam in question was found to have been spliced in a manner that does not comply with the requirements outlined in the NBCC.
Correction: The splice must be corrected in accordance with the NBCC to ensure adequate structural capacity.

  2. Steel Column Requiring Bearing Plate: Section 9.23.3.7 of the NBCC 2015 states that “Where members are supported on a steel column, a bearing plate shall be provided at the base of the member to distribute the load over a sufficient area of the column to prevent local buckling of the column web.” The steel column in question was found to lack a bearing plate, and as a result, the load on the column is not being distributed adequately.
Correction: A bearing plate must be installed to span the width of the beam and distribute the load in accordance with the NBCC.

Conclusion: The observed deficiencies must be corrected to ensure compliance with the NBCC 2015 edition and to provide adequate structural capacity for the building. A follow-up inspection will be necessary to verify that the necessary corrections have been made.

Signed: [Insert Inspector's Signature]

Dang convincing, except:
a) the incorrect-splicing clause is actually 9.23.8.3;
b) the requirement for a plate to span a beam width is 9.17.3.2(1);
c) there is no clause 9.23.3.7 in NBCC 2015; and
d) 9.23.3.3 actually reads "Prevention of Splitting: 1) Splitting of wood members shall be minimized by staggering the nails in the direction of the grain and by keeping nails well in from the edges."

But to the untrained eye …..holy crap.
 
I have a friend who is a private college administrator. On a recent assignment in a humanities class, about a third of the students obviously used Chat GPT to write their paper, and they were referred to her for disciplinary action. Quoting ChatGPT (or any other source) without attribution in a college class paper is technically a form of plagiarism, which can potentially lead to expulsion.

When she met with each student individually, she gave them a choice:
- The student could discuss their future at the college with her, OR
- She could ask ChatGPT right then and there to determine the appropriate disciplinary action for plagiarism at the college level.

None of them chose Chat GPT.
 
What little I have read of how AI is (or should be) used suggests that it isn't a one-shot process. It works by initially sifting through huge masses of data to discern patterns, then gradually improving its accuracy by culling out errors.

"Machine learning describes a subset of artificial intelligence in which a computing system uses algorithms to analyze and learn from data without human intervention to draw inferences from patterns and make predictions. The system may continue to learn and improve over time as it receives more data." (from Adobe’s Content Analysis FAQ page)

We're still a ways from having a generalized AI that isn't directed at a specific task or group of tasks. ChatGPT might be good to write term papers but I don't think I would trust it to write a specification or building code.

A computer is one of the dumbest things ever built; it can only add ones and zeros. Being able to add those ones and zeros billions of times a second, and having programmers devise algorithms to put them to use, is what makes a computer useful.
 
When she met with each student individually, she gave them a choice:
- The student could discuss their future at the college with her, OR
- She could ask ChatGPT right then and there to determine the appropriate disciplinary action for plagiarism at the college level.

None of them chose Chat GPT.
As the example that Inspector Gadget provided points out, the result that Chat GPT produces is a tad too polished. It just looks like something that no inspector I know would write. So teachers see Chat GPT stuff and think, "Ya, Billy didn't do this.....and if he did, I'm a better teacher than I realized."
 
So today I ran the same question to ChatGPT:

what is the volume in cubic inches of a cylinder with a flat top and bottom that is 72" high with a 72" diameter?

It answered incorrectly. I told it that the answer was wrong and gave it the correct answer; then I asked the same question again and it got it right, so apparently it can learn.

I then asked it a second time and it was still correct.

[Screenshots: the three ChatGPT exchanges described above]
 
So today I ran the same question to ChatGPT:

what is the volume in cubic inches of a cylinder with a flat top and bottom that is 72" high with a 72" diameter?

It answered incorrectly. I told it that the answer was wrong and gave it the correct answer; then I asked the same question again and it got it right, so apparently it can learn.

I then asked it a second time and it was still correct.

View attachment 10387
View attachment 10388
View attachment 10389
The bot said that it has no memory, emotions or consciousness. It is aware that you have all of that, and more, so it apologized...twice. This thing is scarier than we know...starting with the fact that it knows what those attributes are and it understands them.

I'm curious about what pronouns we should be using to address the bot.
 