
And So It Begins

Thanks, but I want to clarify.
The way I currently get out in front of standard plan check correction lists is to proactively co-opt them for my own purposes. See this example from Los Angeles Dept. of Bldg. and Safety for multifamily housing: https://dbs.lacity.gov/sites/default/files/efs/forms/pc17/PC.STR.Corr.Lst.18-(Rev.-01-01-2023)--.pdf

In the past, prior to initial plan check submittal, I've proactively printed that correction sheet, crossed out the non-applicable comments, and handwritten next to each comment where the answer could be found on the initial plan check submittal. I upload that pre-responded correction sheet at time of first plan check, effectively saying to the plan checker, "we already did your work for you, so don't waste our time by not looking closely during initial plan check".

Assuming for the moment that LADBS will not embrace any AI plan check program in the near future (no fault of yours; it will just be due to the typical power plays at city hall), your program could still have value at time of initial plan check by publishing their checklists with the responses ("where to find it") already on them.

In other words, a totally clean set of plans with no correction printouts from your program will not convince a manual plan checker that it complies with code, they'll just be wondering if you missed something; thus it will not save them any time (until the day comes that they fully trust your program). However, a standard correction list already cross-referenced to where the compliance can be found on the plans WILL be more productive within the context of their current bureaucracy.
You’re exactly right about how LADBS and other AHJs operate, and that’s the main reason why we’re not targeting building departments as the primary users of PlanCheckPro.AI. The reality is, AHJs are bound by red tape and bureaucracy, so adoption on their side will take time.

Our focus is on the people who feel the delays most: owners, contractors, architects of record (AORs), and engineers of record (EORs). These are the users who want to submit a cleaner set of drawings the first time, avoid boilerplate correction cycles, and keep projects moving without unnecessary RFIs and resubmittals.

So to be clear: PlanCheckPro.AI isn’t trying to convince AHJs to change overnight. It’s giving project teams a way to stay ahead of the curve, reduce friction at submittal, and save real time and money in the process.
 
So yes, skepticism is warranted, but this isn’t vaporware or a cash burn experiment. This is a firm that already does plan review and inspections, applying AI to make the process faster, cleaner, and more consistent. And it’s already in live use.
A more focused AI would certainly provide better info and be more reliable, but what about actually reviewing the plans? I assume it's using image recognition for the review. In an ideal world, every plan would be perfect and legible, but we all know that's not always the case. How does your model account for messy plans? More importantly, what is more important to the AI - written words or the linework? What I mean is, sometimes the text and linework conflict with each other. If one is correct, is the other ignored? Or is it "smart" enough to catch those types of errors?

A few more questions:

1. I'm assuming some internal studies have been performed to test how accurate your program is. How often does it provide an accurate comment (comment is applicable)? How often does it miss something on average, or give a comment that isn't applicable?

2. How much oversight does this need?

3. You said there is no cost to this (comment #8 above). Maybe I missed something where you explained this, but how does the company make money? Like I said before, AI companies are created and destroyed daily regardless of their valuation. Why should I try out a program if the program may not be around in a few months? I'm not trying to say you're about to fail (idk what your finances look like). Just asking because I'm a hardcore skeptic with these types of things and am not going to invest time in a new program without some reassurances that there's a way for you to survive - I'm a long term investor for lack of a better term. I'm always happy to be proven wrong about stuff like this :).
 
Great set of questions, and I’ll do my best to answer without diving into proprietary details.

On the plan review side: you’re right that not every set of plans is clean and consistent. The system is built to interpret both written notes and linework, but we don’t rely exclusively on one or the other. When there’s a conflict, the AI does what a human reviewer would: it flags the inconsistency so the design team can resolve it. The goal isn’t to “pick a winner” between text and linework; it’s to make sure the conflict is visible early so it doesn’t become an RFI or a failed inspection down the road.

To your specific points:
1. Accuracy – We’ve done extensive internal testing, and I’ll just say this: the outputs are accurate enough that we’ve rolled it out live on real projects. No system, human or AI, catches 100% or eliminates all false positives, but what we’re seeing is consistent, code-tied comments that materially reduce back-and-forth during permitting. The system is more than 95% accurate. We have been testing internally for the last year on hundreds of projects, and we have a department within Pacifica Engineering Services that focuses 100% on plan reviews on a day-to-day basis, so we have the expertise and data required for testing and training the model.

2. Oversight – Think of it as an accelerator, not a replacement. The AI handles the first pass and generates structured comments, and then it’s up to the design team, owner, or private provider to review, resolve, or override. It’s meant to cut the hours spent combing through plan sets, not to eliminate professional judgment.

3. Cost / sustainability – Right now, we’ve opened access at no cost because adoption and feedback are more valuable to us at this stage than monetization. But the company behind it, Pacifica Engineering Services (https://pacificaes.com), isn’t a startup burning VC money. We’re an established Florida engineering firm that already performs plan review and inspections for building departments and private clients. PlanCheckPro.AI is an extension of services we already provide, not our only business model. That means it’s not going away in a few months; it’s part of a long-term strategy to modernize how permitting and QA/QC get done.

So in short: we’re not trying to sell hype, we’re applying a focused tool to a very real problem we already deal with every day.
 
Yes, the workflow is exactly that: you upload your plans, the system runs a full code check, and you get back a detailed correction report with code citations and corrective actions.

So your sole intent on being here is to plug an energy-sucking AI system that, if it actually works (which it most certainly doesn't), will put half the people on this forum out of work.

Brilliant.
 
Even if it does work, you still need people to oversee it and check it. Y'all will still have a job, it'll just probably be a more annoying job...

Idk if it actually works or not, I've never tried it and won't be trying it (I don't work in Florida). I take the same approach to promises made by AI devs as I do with crypto. "Everything's a scam until proven otherwise."

Also, their parent company uses a BUNCH of AI images on their website. Makes me hesitant to believe anything they say tbh... It's a red flag when I see that imo.
 
In the past, prior to initial plan check submittal, I've proactively printed that correction sheet, crossed out the non-applicable comments, and handwritten next to each comment where the answer could be found on the initial plan check submittal. I upload that pre-responded correction sheet at time of first plan check, effectively saying to the plan checker, "we already did your work for you, so don't waste our time by not looking closely during initial plan check".

That's a good approach. In fact, IMHO it's an excellent approach.

I get asked periodically if we require applicants to submit an ICC Plan Review Record. We don't -- and there's nothing in the code that allows us to require it. We have had applicants submit them anyway, and they've been universally useless. The ones we have received have just gone through the form and entered "Complies" for everything -- with no mention of how they comply, or where a plan reviewer can look at the construction documents to find the information needed to verify compliance.
 
I keep being told that AI is the future of our business (plan review). Maybe it is...the future. I am inherently skeptical of "AI", not because I don't think it is possible, but because I haven't seen it well defined. So what is AI? What is machine learning? What makes either not just a set of If-Then scenarios?

As I thought about this, I come back to my opinion that a lot of real problems come from a poor code analysis, which leads to many more problems as reviews are conducted. For example, when a plan comes in and is Type II construction, how does the AI hold that data? Say there is a wall section 40 pages in; does the AI recognize this and compare it to the definition of Type II construction, or does it stop at allowable area, stories, etc.? Does it flag that miniscule detail out of the GBs of data? Then again, do humans? (I ask because this isn't that uncommon.)

As I said I am skeptical, but also believe I have an open mind. And I am ambivalent about it putting me out of a job. Maybe close enough to retirement, or far enough away from caring....who can tell. I kind of figure that given the current state of the industry, if we don't get AI we may be in really bad shape.

I spoke with ICC about their research into this a couple of years ago. They said they were a long way away, but in this type of tech, two years is a lot. My company is/was exploring this, and we have spoken about proofing it, but it hasn't gone anywhere, though not sure if that is due to lack of availability, cost, or corporate gobbledygook.

Finally, if AI can figure out a way to make my job obsolete, how big a leap would it be to decide my very existence is obsolete? Only half kidding with that!
 
I keep being told that AI is the future of our business (plan review). Maybe it is...the future. I am inherently skeptical of "AI", not because I don't think it is possible, but because I haven't seen it well defined. So what is AI? What is machine learning? What makes either not just a set of If-Then scenarios?
I'm really into AI, and very skeptical of every single thing I see promised. AI will almost certainly be like the internet: something that completely and totally changes the world. But it will also be like the internet in that there will be a million promises, people throwing stuff at the wall to see what sticks, things that make precisely zero sense in hindsight, and only the mostly useful things surviving the collapse. We're at the bubble phase imo. Every great invention has a bubble phase.

As I thought about this, I come back to my opinion that a lot of real problems come from a poor code analysis, which leads to many more problems as reviews are conducted. For example, when a plan comes in and is Type II construction, how does the AI hold that data? Say there is a wall section 40 pages in; does the AI recognize this and compare it to the definition of Type II construction, or does it stop at allowable area, stories, etc.? Does it flag that miniscule detail out of the GBs of data? Then again, do humans? (I ask because this isn't that uncommon.)
Every AI I've used has made some pretty big mistakes. I'll ask a question about code and get a good response that isn't applicable because of some exception in another part of the code that I didn't spell out in great detail. Or it'll tell me something that doesn't exist while giving me a link to something completely unrelated. Or tell me something exists, reference the correct code, but use language from something completely different. Even the AIs I train off my data or AIs that are geared to specific fields make these types of mistakes. Humans aren't much better when it comes to making mistakes, but AI can be perfect one second and a bumbling mess the next in a way most humans aren't. With people, you can get a good idea of what to expect. With current AI, it's a coin toss at best.
 
So your sole intent on being here is to plug an energy-sucking AI system that, if it actually works (which it most certainly doesn't), will put half the people on this forum out of work.

Brilliant.
That’s not the intent here at all. PlanCheckPro.AI wasn’t created to replace people or take work away from professionals, it was built to make the process cleaner for owners, contractors, architects of record, and engineers of record.

We know AHJs are bound up in red tape, and that’s not the fight we’re picking. This tool is about helping project teams get ahead of the correction cycles, reduce RFIs, and cut down on wasted time and money. At the end of the day, AHJs still hold the authority, and professional oversight is still required.

What we’re offering is a way to move faster and smarter in an industry that’s already stretched thin. If anything, it helps free up people on both sides of the table to focus on the work that requires judgment and expertise, not busywork.
 
Even if it does work, you still need people to oversee it and check it. Y'all will still have a job, it'll just probably be a more annoying job...

Idk if it actually works or not, I've never tried it and won't be trying it (I don't work in Florida). I take the same approach to promises made by AI devs as I do with crypto. "Everything's a scam until proven otherwise."

Also, their parent company uses a BUNCH of AI images on their website. Makes me hesitant to believe anything they say tbh... It's a red flag when I see that imo.
Fair point, no system should ever be “blindly trusted.” You’re right that oversight will always be needed, and that’s exactly how PlanCheckPro.AI is designed: as a QA/QC accelerator, not a replacement for professional judgement.

As for credibility, this isn’t coming from a random startup with a slick website. PlanCheckPro.AI is developed by Pacifica Engineering Services (https://pacificaes.com), a Florida-based engineering firm. We’re not a software outfit; one division of our firm already performs Plan Review and Inspections directly for building departments. On top of that, we’ve worked for some of the largest developers and contractors in the country.

We’ve also been recognized repeatedly for growth and performance:
  • Inc. 5000 – America’s Fastest Growing Companies (2023, 2024 & 2025)
  • Gator100 – University of Florida’s fastest-growing Gator-led companies (#6 in 2023, #28 in 2024, #20 in 2025)
  • South Florida Business Journal Fast 50 and Business of the Year (2023, 2024 & 2025; 2nd fastest-growing business in 2025)
  • Zweig Group Hot Firm List (2024 & 2025) – top 100 fastest-growing engineering firms in the country
Those awards don’t happen by accident; they’re a reflection of consistent delivery on complex, high-profile projects.

So I understand the skepticism, AI is full of hype and overpromises. But this isn’t a scam or a gimmick. It’s a tool built by a firm that’s already doing this work at scale in the real world, for real clients.
 
That's a good approach. In fact, IMHO it's an excellent approach.

I get asked periodically if we require applicants to submit an ICC Plan Review Record. We don't -- and there's nothing in the code that allows us to require it. We have had applicants submit them anyway, and they've been universally useless. The ones we have received have just gone through the form and entered "Complies" for everything -- with no mention of how they comply, or where a plan reviewer can look at the construction documents to find the information needed to verify compliance.
That’s exactly the gap we’re trying to close. A generic “Complies” stamped next to a checklist is useless; it doesn’t help the reviewer, and it doesn’t help the design team build confidence in their submittal.

With PlanCheckPro.AI, the output isn’t just a binary complies/doesn’t comply. The system ties every comment back to the specific code section and, where applicable, identifies where in the plans compliance is shown. Instead of leaving the plan checker wondering, it’s giving a direct roadmap: “see sheet X, detail Y, note Z.”

And to be clear, our intent isn’t to target building departments as the main users. We know AHJs are constrained by bureaucracy and red tape. The people who benefit most from this are owners, contractors, AORs, and EORs, because they’re the ones trying to avoid wasted cycles, RFIs, and delays. Having a report that mirrors the kind of correction list you described, but with the answers already mapped to the plans, gives those teams a way to get ahead of the process before the first submittal even lands at the building department.
 
I keep being told that AI is the future of our business (plan review). Maybe it is...the future. I am inherently skeptical of "AI", not because I don't think it is possible, but because I haven't seen it well defined. So what is AI? What is machine learning? What makes either not just a set of If-Then scenarios?

As I thought about this, I come back to my opinion that a lot of real problems come from a poor code analysis, which leads to many more problems as reviews are conducted. For example, when a plan comes in and is Type II construction, how does the AI hold that data? Say there is a wall section 40 pages in; does the AI recognize this and compare it to the definition of Type II construction, or does it stop at allowable area, stories, etc.? Does it flag that miniscule detail out of the GBs of data? Then again, do humans? (I ask because this isn't that uncommon.)

As I said I am skeptical, but also believe I have an open mind. And I am ambivalent about it putting me out of a job. Maybe close enough to retirement, or far enough away from caring....who can tell. I kind of figure that given the current state of the industry, if we don't get AI we may be in really bad shape.

I spoke with ICC about their research into this a couple of years ago. They said they were a long way away, but in this type of tech, two years is a lot. My company is/was exploring this, and we have spoken about proofing it, but it hasn't gone anywhere, though not sure if that is due to lack of availability, cost, or corporate gobbledygook.

Finally, if AI can figure out a way to make my job obsolete, how big a leap would it be to decide my very existence is obsolete? Only half kidding with that!
Sifu, I appreciate the way you laid this out. You’re asking the right questions, not the buzzword ones, but the practical ones that actually matter to reviewers.

When we say “AI” in the context of PlanCheckPro.AI, we’re not talking about some abstract future machine consciousness. At its core, it’s machine learning trained on real plan sets, real code, and real correction cycles. The difference between this and a simple if/then rules engine is that the system can handle variation: different drafting conventions, different ways notes are written, different ways compliance is documented. It’s not looking for a single trigger; it’s interpreting patterns and then tying those back to code citations.
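To make that concrete, here’s a deliberately oversimplified toy sketch in generic Python (nothing to do with our actual stack) of the difference: a hard-coded if/then rule fires only on its exact trigger, while even a crude scorer trained on labeled examples generalizes to phrasings it has never seen verbatim.

```python
# Toy illustration only - NOT PlanCheckPro.AI's implementation.

def rule_engine(note: str) -> bool:
    # Brittle if/then: fires only on one exact phrasing.
    return "TYPE II-B CONSTRUCTION" in note.upper()

class KeywordModel:
    """Crude learned scorer: weights keywords from labeled examples."""

    def __init__(self):
        self.weights = {}

    def train(self, examples):
        # examples: list of (plan_note, is_type_ii) pairs
        for note, label in examples:
            for word in note.upper().replace("-", " ").split():
                self.weights[word] = self.weights.get(word, 0) + (1 if label else -1)

    def predict(self, note: str) -> bool:
        score = sum(self.weights.get(word, 0)
                    for word in note.upper().replace("-", " ").split())
        return score > 0

model = KeywordModel()
model.train([
    ("CONSTRUCTION TYPE: II-B", True),
    ("TYPE II-B CONSTRUCTION PER IBC TABLE 601", True),
    ("TYPE V-A WOOD FRAME", False),
])

# A phrasing neither system has seen verbatim:
note = "BLDG IS TYPE II B NON-COMBUSTIBLE"
print(rule_engine(note))    # False - the exact trigger missed it
print(model.predict(note))  # True - learned keywords generalize
```

A real system layers far more than keyword weights on top of this (geometry, cross-sheet context, code citation lookup), but the principle is the same: patterns learned from examples instead of triggers spelled out one by one.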

On your Type II example: yes, the system is designed to carry data like construction type through the review. It doesn’t just stop at allowable area or stories. If there’s a wall section buried on page 40, it can pick that up, compare it against the requirements for Type II, and either confirm compliance or flag an inconsistency. Will it catch every nuance the way a sharp, experienced reviewer would? Not always. But it will consistently surface those “miniscule details” across gigabytes of data that humans often miss simply because of time pressure. That’s the real value, consistency and speed.

And to your bigger point: this isn’t aimed at putting plan reviewers out of work. Honestly, with the backlog and shortage of qualified reviewers, if we don’t start integrating tools like this, the system is going to collapse under its own weight. AHJs will always hold authority, and professional oversight will always be required. What AI can do is reduce the busywork so reviewers spend their time on the judgment calls that actually matter.

For context, this isn’t coming from a tech lab: PlanCheckPro.AI is developed by Pacifica Engineering Services, a firm that already performs Plan Review and Inspections directly for building departments. We live these bottlenecks every day. That’s why we built the tool: not to replace people, but to help keep projects and departments from drowning in volume.

I share your skepticism, AI is full of hype. But this isn’t about hype, it’s about solving a very real bottleneck in our industry with a tool that’s already in live use.
 
These are the users who want to... avoid boilerplate correction cycles...

That's a good approach. In fact, IMHO it's an excellent approach.

I get asked periodically if we require applicants to submit an ICC Plan Review Record. We don't -- and there's nothing in the code that allows us to require it. WE have had applicants submit them anyway, and they've been universally useless. The ones we have received have just gone through the form and entered "Complies" for everything -- with no mention of how they comply, or where a plan reviewer can look at the construction documents to find the information need to verify compliance.
What I'm trying to convey to Wesley is that when there's a plan checker whose standard procedure is to crank out an initial boilerplate correction list, there's no way to avoid it, with or without AI. The only thing you can do is pre-empt it: first apply the list to yourself as the DPOR, then write your own response to it, then turn that response over to the plan checker to take the wind out of their sails.
Wesley said his AI won't do that kind of pre-emptive positive report, so I don't see how it can claim to reduce boilerplate correction cycles.

By the way, the response "complies" or "done" is a real rookie move made by many an architectural intern / recent grad. It does nothing to move the ball down the field.
 
I'm looking forward to being cleared for access to test it. I have several recent projects I've reviewed that all generated a number of comments on the first try. I'm going to be interested in comparing the AI comments against my comments.
 
That’s not the intent here at all. PlanCheckPro.AI wasn’t created to replace people or take work away from professionals, it was built to make the process cleaner for owners, contractors, architects of record, and engineers of record.

Quoting party talking points doesn't change the facts.

Here's the thing, bud: we're not a bunch of glassy-eyed teenagers willing to swallow party talking points. Go away.
 
Having been involved in machine-readable codes and ai plan review on a national level, I can say we are a LONG way from jobs getting replaced.

ai companies also don't want their software to be the only check on code compliance. Why? If a person makes a mistake, it's fine. We accept that. People make mistakes. When technology makes a mistake, no one ever trusts it again and their product becomes worthless.

The way most companies appear to be approaching this is that the software can do the baseline review and flag potential issues to the designer and/or plan reviewer. In our discussions I expressed how advantageous it would be to have both the design and regulatory sides working off the same system (the thought at the time was that only regulators would be using it). If designers run their plans through the software first, they can run down all the flags, correct the ones that are legitimate issues, and submit explanations on the ones that are not actually issues. This avoids a first round of reviews and saves weeks to months in review.

Designers will likely be producing sets that have improved code compliance right out the gate, likely reducing costs associated with change orders.
Regulators will likely be able to issue permits faster with improved plan sets and focus their review on more detail and/or complex issues where ai doesn't perform as well.
 
I did a review last year on an R2 project. Not a huge or complicated one. One of the most challenging reviews I have done, but it was the plans that made it so, not the project. I can't be certain, but it felt like it was run by some sort of AI program.

There were 10 full plan pages of code analysis. The analysis was arranged sequentially, starting with adopted codes, AHJ-specific requirements, and accessibility, then starting with IBC ch. 3 and going through the entire IBC. It listed what I assume it thought were applicable code sections, reprinted the code language, then SOMETIMES inserted some sort of explanation for compliance.

The first problem was how it determined what was applicable. The 2nd problem was the massive amount of data with no explanation, just code reprint. The 3rd problem was that the explanation was sometimes completely wrong or not applicable. Finally, it then duplicated the exact same data on the plan pages applicable to each section. To top it off, it seemed that the code analysis and the plans themselves had a hard time communicating, since in a lot of instances what they showed in the code analysis wasn't demonstrated or was contradicted by the plans.

The entire thing ended up being useless; it took up space and time with zero benefit. It took 3 reviews, with multiple dozens of comments, to get it done. It felt like the process should have been what you are talking about, which would have been internal QA/QC. Instead they just spit it out on the plans and gummed it up. If it was AI, maybe it was bad AI, maybe it was bad AoR, maybe the fancy new tool just wasn't ready for prime time. No matter the reason, it made me develop a very bad taste for that sort of thing. I would rather have had an honest attempt at real architect stuff...warts and all.

I feel like the level of effort in designers and building officials is spiraling down. To me there is a choice to improve the humans, or move on. I would like to improve the humans, but I am afraid that ship has sailed. Now with the promise of AI (I am beginning to get very fatigued on that acronym) I feel like attempts at human improvement will go away completely.
 
The way most companies appear to be approaching this is that the software can do the baseline review and flag potential issues to the designer and/or plan reviewer. In our discussions I expressed how advantageous it would be to have both the design and regulatory sides working off the same system (the thought at the time was that only regulators would be using it). If designers run their plans through the software first, they can run down all the flags, correct the ones that are legitimate issues, and submit explanations on the ones that are not actually issues. This avoids a first round of reviews and saves weeks to months in review.

My view is this: many plans submitted for review are already problematic as it is. As I've ranted about elsewhere, I've seen professional firms issue plans that had such failings as:
- Incorrect exit shaft configuration
- Incorrect/missing attributions for fire separations (using part 9 for a part 3 building, for example)
- Incorrect fire separations
- Incorrect building classification [such as a two-storey building being classified as a one-storey building!]
- Incorrect barrier free (citing other Codes/American codes)
- Incorrect exit paths/door swing/exit location
- Incorrect guard heights
- Incorrect stair configurations
- Using incorrect local data for seismic forces

I can grasp small human errors. I make them myself, so I can't be critical of others. Yet ... there is a point where plans should have a basic level of competency. They currently don't. Given the utter slop that AI generally creates when given even basic technical information (last week, Gemini was telling people who have gluten sensitivities not to drink milk, because milk has gluten in it...), I am terrified of the idea of receiving any plan where AI has done anything more than compose the email accompanying the plans.
It felt like the process should have been what you are talking about, which would have been internal QA/QC. Instead they just spit it out on the plans and gummed it up. If it was AI, maybe it was bad AI, maybe it was bad AoR, maybe the fancy new tool just wasn't ready for prime time. No matter the reason, it made me develop a very bad taste for that sort of thing. I would rather have had an honest attempt at real architect stuff...warts and all.

QED.
 
My view is this: many plans submitted for review are already problematic as it is. As I've ranted about elsewhere, I've seen professional firms issue plans that had such failings as:
- Incorrect exit shaft configuration
- Incorrect/missing attributions for fire separations (using part 9 for a part 3 building, for example)
- Incorrect fire separations
- Incorrect building classification [such as a two-storey building being classified as a one-storey building!]
- Incorrect barrier free (citing other Codes/American codes)
- Incorrect exit paths/door swing/exit location
- Incorrect guard heights
- Incorrect stair configurations
- Using incorrect local data for seismic forces

I can grasp small human errors. I make them myself, so I can't be critical of others. Yet ... there is a point where plans should have a basic level of competency. They currently don't. Given the utter slop that AI generally creates when given even basic technical information (last week, Gemini was telling people who have gluten sensitivities not to drink milk, because milk has gluten in it...), I am terrified of the idea of receiving any plan where AI has done anything more than compose the email accompanying the plans.


QED.
It sounds like Wesley’s program will be really helpful for catching that.
Most of those sound like somebody starting with a “boilerplate” CAD or BIM template for a project. They may just grab stuff from their last, perhaps unrelated, project.

I’ve had junior staff submit a small single-family residence for plan check where the title sheet listed an NFPA 13 sprinkler system as a deferred approval. They just copy/pasted notes from a large apartment project without thinking. Or maybe they tell themselves they’ll edit it later and forget to go back and do it. And it’s not a code violation to put a full NFPA 13 system in a house, so it meets code, is approvable, and won’t get flagged by the plan checker.

That said, the buck always stops with A/E firm management: to clearly state project parameters, to force staff to literally red-flag their own copy/paste work on a file for further review, and to do full DPOR-level QC and supervision.
 
You guys should sign up for a demo and test it out.
This is the only way any one of us can actually verify what they're saying. If I worked in Florida or had any projects in Florida, I'd probably spend a weekend testing this thing out.

But I don't, and I have enough AI testing and training as is. It seems like very useful tech for architects and designers, so I'm hoping someone can prove its usefulness, refine it, and apply it to California.

What I personally really want is an AI that can turn field measurements into a CAD plan. ChatGPT can do part of this, but, to put it nicely, it's very limited. If I could just feed an AI a hand-drawn, out-of-scale plan with some dimensions and have it spit out a 2D CAD plan, I'd be a very happy camper, even if it wasn't 100% accurate.
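For what it's worth, the measurements-to-plan step doesn't strictly need AI. Here's a toy sketch, assuming the field notes are a simple distance-and-bearing traverse; the function names and the minimal DXF subset are my own illustration, not any real product's API:

```python
import math

def traverse_to_points(start, legs):
    """Convert field-measured legs (distance, bearing in degrees
    clockwise from north) into 2D plan coordinates."""
    x, y = start
    pts = [(float(x), float(y))]
    for dist, bearing in legs:
        rad = math.radians(bearing)
        x += dist * math.sin(rad)  # east component
        y += dist * math.cos(rad)  # north component
        # round, and add 0.0 to normalize any -0.0 from float noise
        pts.append((round(x, 3) + 0.0, round(y, 3) + 0.0))
    return pts

def lines_to_dxf(points):
    """Emit a minimal ASCII DXF (ENTITIES section only) drawing the
    traverse as LINE entities -- crude, but most CAD apps will open it."""
    out = ["0", "SECTION", "2", "ENTITIES"]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        out += ["0", "LINE", "8", "0",
                "10", str(x1), "20", str(y1),
                "11", str(x2), "21", str(y2)]
    out += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(out)

# A 10 ft x 20 ft rectangular room measured as four legs:
room = traverse_to_points((0, 0), [(20, 90), (10, 180), (20, 270), (10, 0)])
print(room[-1])  # closure check: a clean traverse lands back at (0.0, 0.0)
```

In practice you'd reach for a real DXF library (ezdxf or similar) rather than hand-writing group codes, but the closure check above is exactly where out-of-scale hand measurements reveal their errors.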
 
It would be worth testing it out for the 95% of the codes that are exactly the same. Just try it.
 
Of course. As the saying goes: garbage in, garbage out. The output from the AI system has to be reviewed by the RDP. If the AI system produces garbage from good plans, presumably no one will use the software. If the RDP submits the garbage, a complaint against them must be filed.

A while ago, I had an appeal where the RDP was mixing Part 9 and Part 3 assemblies. I really struggled with whether I should report this person to the engineering society. He was claiming to be skilled in fire protection engineering. My worry was that this would make people too cautious about accessing the appeal system, thus denying them access to justice. I ultimately decided that was not my role, but I always hoped the originating building department had filed a complaint against him.

Sometimes I worry we are hoping that the poorer RDPs are just going to magically get better somehow. Change is hard, and people need incentive to do it. It's really easy for them to blame the issues they face in plan review on the building department. In my mind, the only way we are going to move forward is if complaints are lodged against the RDPs committing these cardinal code sins.
 