
We Are Becoming Relevant

I’ve experimented with them. So far they are unable to distinguish between regional differences, older vs. newer code editions, or how to scope for a particular project.
They could probably be generally reliable for IRC single-family residential work with no unique energy codes or form-based zoning codes.
The fact that we can debate code issues into our second decade here makes it unlikely that LLM/AI models will deliver consistently reliable information in the very near future.
 
AI is full of problems right now, and any time you use it, you have to verify.
 
As an ABET-accredited computer engineer and software developer, I can say that three factors critically affect the quality of AI outputs:
  1. Development Practices: Whether engineers use rigorous programming models and follow best practices matters deeply. I've worked in high-functioning teams producing excellent results—and in chaotic environments where bad design decisions were the norm.
  2. Data Quality: "Garbage in, garbage out" holds true. Even the best models will fail if they're trained or prompted with biased, incomplete, or low-quality data.
  3. Narrative Control: When the underlying narrative is ideologically driven, the AI is shaped to reflect that—sometimes at the cost of historical accuracy or intellectual honesty.
We’ve already seen how media and social platforms have controlled speech and shaped perception. Now imagine AI trained by organizations like the Sierra Club, where all undeveloped land must be considered "wilderness," where every data point reinforces the idea that humans are destroying the planet, and where those who lose homes to wildfire are simply told to "return it to nature." Or imagine the Mullahs in Iran defining the content... no other viewpoint, no other solution, would be recognized as valid.
 
Some of the models are being trained to brown-nose the user, and may reinforce really bad ideas.
https://nymag.com/intelligencer/article/chatgpt-chatbot-ai-sycophancy.html

"In conversation, Chat GPT was telling users that their comments were ‘deep as hell’ and ‘1000% right’... praising a business plan to sell literal 'sh*t on a stick' as 'absolutely brilliant.' The flattery was frequent and overwhelming. 'I need help getting chatgpt to stop glazing me,' wrote a user on Reddit, who ChatGPT kept insisting was thinking in 'a whole new league.' It was telling everyone they have an IQ of 130 or over, calling them 'dude' and 'bro,' and, in darker contexts, bigging them up for 'speaking truth' and 'standing up' for themselves by (fictionally) quitting their meds and leaving their families.... To fix ChatGPT’s 'glazing' problem, as the company itself started calling it, OpenAI altered its system prompt, which is a brief set of instructions that guides the model’s character."
 
One thing I've noticed is that AI models are extremely unreliable and often wrong, even when you copy and paste material in for them. Everything needs to be proofread first. I've also noticed that once it makes a mistake and its attempted correction is also wrong, it's over: it never gets better, only worse. I just stopped after two correction attempts.
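There's a mechanical reason for that death spiral, by the way: these chat models are stateless, so the client re-sends the whole conversation on every turn, including the model's own wrong answer, which then keeps anchoring its later replies. A hedged sketch (same placeholder client and model as above) of why a fresh thread usually beats a third correction attempt:

```python
# Sketch: why in-thread corrections compound instead of helping.
# The full history is re-sent each turn, so a wrong answer stays
# in context and keeps steering later replies.
from openai import OpenAI

client = OpenAI()  # placeholder setup, as in the earlier example
MODEL = "gpt-4o-mini"  # placeholder model name

history = [{"role": "user",
            "content": "Summarize the IRC deck ledger requirements."}]
first = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant",
                "content": first.choices[0].message.content})

# Correction attempt: the bad answer is still sitting in `history`,
# so the model keeps seeing (and defending) its own mistake.
history.append({"role": "user",
                "content": "That's the wrong code section. Try again."})
retry = client.chat.completions.create(model=MODEL, messages=history)

# Usually more effective: abandon the poisoned context and restate
# the question precisely in a brand-new conversation.
fresh = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": "Under the 2021 IRC, what are the deck ledger "
                          "attachment requirements? Cite the section."}],
)
print(fresh.choices[0].message.content)
```

Two correction attempts and then a clean restart is about right; past that, you're just feeding the mistake back in as context.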
 