
We Are Becoming Relevant

I've experimented with them. So far they are unable to account for regional differences, distinguish older from newer code editions, or scope for a particular project.
They could probably be generally reliable for IRC single-family residential work with no unique energy codes or form-based zoning codes.
The fact that we can debate code issues into our second decade here makes it unlikely that LLM / AI models will deliver consistently reliable info in the very near future.
 
AI is full of problems right now, and any time you use it, you have to verify.
 
As an ABET-accredited computer engineer and software developer, I can say that three factors critically affect the quality of AI outputs:
  1. Development Practices: Whether engineers use rigorous programming models and follow best practices matters deeply. I've worked in high-functioning teams producing excellent results—and in chaotic environments where bad design decisions were the norm.
  2. Data Quality: "Garbage in, garbage out" holds true. Even the best models will fail if they're trained or prompted with biased, incomplete, or low-quality data.
  3. Narrative Control: When the underlying narrative is ideologically driven, the AI is shaped to reflect that—sometimes at the cost of historical accuracy or intellectual honesty.
We've already seen how media and social platforms have controlled speech and shaped perception. Now imagine AI trained by organizations like the Sierra Club, where all undeveloped land must be considered "wilderness," where every data point reinforces the idea that humans are destroying the planet, and where those who lose homes to wildfire are simply told to "return it to nature." Or imagine the Mullahs in Iran defining the content, where no other viewpoint and no other solution would be recognized as valid.
 
Some of the models are being trained to brown-nose the user, and may reinforce really bad ideas.
https://nymag.com/intelligencer/article/chatgpt-chatbot-ai-sycophancy.html

"In conversation, Chat GPT was telling users that their comments were ‘deep as hell’ and ‘1000% right’... praising a business plan to sell literal 'sh*t on a stick' as 'absolutely brilliant.' The flattery was frequent and overwhelming. 'I need help getting chatgpt to stop glazing me,' wrote a user on Reddit, who ChatGPT kept insisting was thinking in 'a whole new league.' It was telling everyone they have an IQ of 130 or over, calling them 'dude' and 'bro,' and, in darker contexts, bigging them up for 'speaking truth' and 'standing up' for themselves by (fictionally) quitting their meds and leaving their families.... To fix ChatGPT’s 'glazing' problem, as the company itself started calling it, OpenAI altered its system prompt, which is a brief set of instructions that guides the model’s character."
 
The one thing I've noticed is that AI models are extremely unreliable and often wrong, even when you copy and paste the source text in directly. Everything needs to be proofread first. I've also noticed that once it makes a mistake and its correction is also wrong, it's over: it never gets better, only worse. I just stopped after two correction attempts.
 
I just heard a story today that a surprising percentage of young people would be willing to marry a non-human entity (I couldn't bring myself to listen to the entire thing, but they were speaking of AI). Maybe it's the shine-on Yikes posted about. Maybe it's because human intelligence is no less artificial than artificial intelligence.
 
I have a coworker who uses ChatGPT to figure out what the code requirements are for [insert literally anything]. He ends up partially or completely wrong most of the time.

I use AI for coding or for finding info (not code related) on a topic I find difficult. It's fairly good with programming and with more obscure / complex topics, but by no means perfect. I've never gotten a correct answer when it comes to building codes, though. It always references the national model code, even when I specify a particular state, and it clearly has no idea what applies in specific situations.

Pretty sure that's partially caused by the loneliness and social isolation the pandemic accelerated. Lock a bunch of people in their homes for months or years, make them interact only over Zoom, and you end up with a bunch of socially awkward kids and young adults who are way too reliant on technology for everything. I read something recently that AI being constantly nice and never critical is also a driving factor (what Yikes posted about). Constant validation, or at least an absence of criticism, is a powerful draw for a depressed generation.

Apparently AI is mostly used for therapy now... which probably isn't good when the AI can only agree with you. That's nuclear-level heroin injected directly into people's brains.
 
I don't doubt that at all. If you are an expert in your field, you really see how inaccurate AI is.
You're being generous. Basic reading and critical thinking are all you really need to figure out it's wrong.

ChatGPT when asked a basic question:
[screenshot: ChatGPT's answer, citing fixture counts]
It got the references right, but the numbers above are from IBC Chapter 29, which isn't adopted in California.

2022 CPC:
[screenshots: the relevant 2022 CPC fixture table]

It's not like this is a complex question. One glance at the CPC table is all you need to figure out the AI is wrong. AI really seems to struggle when there are region-specific requirements that differ from the national model code, even for a state as populous as California.
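To make that failure mode concrete: which plumbing code governs is a jurisdiction-level adoption fact, not something you can derive from the model code itself. Here's a toy sketch in Python; the mapping is illustrative only, not an authoritative adoption list:

    # Toy illustration: code adoption is jurisdiction-specific, so a tool that
    # silently defaults to the national model code (as the AI did above) is
    # broken by design. This mapping is illustrative, NOT an adoption database.
    ADOPTED_PLUMBING_CODE = {
        "CA": "2022 California Plumbing Code (CPC)",  # IBC Chapter 29 is not adopted in CA
        # ... other states adopt various editions of the IPC or IBC Chapter 29
    }

    def plumbing_code_for(state: str) -> str:
        code = ADOPTED_PLUMBING_CODE.get(state)
        if code is None:
            # Refuse rather than fall back to the national model code --
            # the silent fallback is exactly the mistake the AI keeps making.
            raise LookupError(f"No verified adoption entry for {state}; check with the AHJ.")
        return code

    print(plumbing_code_for("CA"))  # -> 2022 California Plumbing Code (CPC)

The point isn't the lookup, it's the refusal branch: a reliable answer has to know when it doesn't know, which is precisely where current models fall down.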
 