Advanced computer learning – artificial intelligence (AI) – has arrived on the scene in three waves.

In the 1980s, the first wave consisted of rule-based software: programs that manipulated documents according to hand-written rules guiding their decisions.

Around 2011, deep learning emerged: we began training AI models on large amounts of data rather than hand-written rules. Nearly all AI activity today involves deep learning.

In the third wave, deep learning now centers on large language models (LLMs). More of this will be coming on the scene as the number of AI engineers grows, and the pace of innovation will continue to accelerate, with transformation taking place over months, not years.

In the short term, AI changes our relationships with documents and the information in them. It automates and augments, making up for the shortage of skilled workers. We are already struggling to find enough people to fill jobs, so the concern about computers taking jobs away is less pressing than originally thought. The aspects of construction that people like least – the “grunt” work – will become more automated or augmented. Even so, the results will still need to be reviewed by a person.

In the long term, AI will be able to act as an agent in the world. We will have more models and products that enable us to tell AI to perform certain tasks. The ultimate question is how many of the steps in achieving those tasks we will be able to trust it with.

For the moment, many people agree we can’t fully trust AI. It will take a while before we can trust it reliably, because we don’t know how it fails. This is the biggest problem with AI.

Indeed, even ChatGPT users can tell it’s not perfect in its results. However, we don’t know how it’s imperfect. AI will make mistakes a person wouldn’t, and it won’t make mistakes a person would. Thus, it cannot replace a human being. A computer doesn’t have that “little voice” that tells it something is not quite right with a decision. While a person can ask themselves, “Is this the right thing to do?”, AI systems are not built to question themselves. They will confidently generate answers that are wrong. Humans need to be in the loop, but we’re not yet sure when that needs to happen.

We are beginning to trust AI in steps. It’s up to us to know how AI fails and be on the lookout for that. We must know its weaknesses, just as we would with human employees. We can use AI as a brainstorming partner to spark creativity, but we need to check its work.

As a data-driven tool, AI can help make decisions about resource allocation. It can find an optimal solution when given the right inputs and parameters. The key is providing a full spectrum of input to eliminate preconceptions. For instance, if no information on wood is input, there will be no output decisions that include wood.
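The wood example can be sketched in a few lines of code. This is a toy illustration, not a real construction tool: the material names, costs, and strength numbers are all invented, and the optimizer is a simple cheapest-feasible search. The point it demonstrates is that an option absent from the input can never appear in the output.

```python
# Toy resource-allocation sketch: pick the cheapest material that meets a
# strength requirement. All names and numbers here are hypothetical.

def best_material(candidates, min_strength):
    """Return the cheapest candidate meeting the strength requirement, or None."""
    feasible = [m for m in candidates if m["strength"] >= min_strength]
    return min(feasible, key=lambda m: m["cost"], default=None)

materials = [
    {"name": "steel",    "cost": 120, "strength": 9},
    {"name": "concrete", "cost": 80,  "strength": 7},
    {"name": "wood",     "cost": 50,  "strength": 6},
]

# With wood in the input data, wood can win on cost.
print(best_material(materials, min_strength=5)["name"])   # wood

# Remove wood from the input and it can never appear in the output.
no_wood = [m for m in materials if m["name"] != "wood"]
print(best_material(no_wood, min_strength=5)["name"])     # concrete
```

The same principle holds for any data-driven decision tool: the recommendation space is bounded by the data it is fed.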

AI can make unstructured data usable. Modern models can examine unstructured or partially structured data and work out how to use it. LLMs and a much better toolkit exist today for handling such input. AI can accelerate problem solving; while the data will always be an issue, the key is asking the right questions.
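The idea of turning unstructured text into usable data can be shown with a deliberately simple sketch. The site note and the field patterns below are invented for illustration; an LLM would handle far messier input than a regular expression can, but the principle is the same: the output is only as good as the question you put to the data.

```python
import re

# Hypothetical site note; in practice this would be free-form field text.
NOTE = "Poured 12 yd3 of concrete on 2024-05-14; inspection passed."

def extract_fields(note):
    """Pull a structured record (quantity and date) out of a free-text note."""
    qty = re.search(r"(\d+)\s*yd3", note)
    date = re.search(r"\d{4}-\d{2}-\d{2}", note)
    return {
        "quantity_yd3": int(qty.group(1)) if qty else None,
        "date": date.group(0) if date else None,
    }

print(extract_fields(NOTE))  # {'quantity_yd3': 12, 'date': '2024-05-14'}
```

A human still needs to check such extractions, as the article stresses: a pattern (or a model) will confidently return a wrong answer when the note doesn’t say what the question assumes.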

Bridging the Gap Podcast, episode 213 with guest Hugh Seaton of The Link "Construction in the Age of AI"

Tune in to this special Bridging the Gap podcast, episode 213, with Hugh Seaton and learn more about the game-changing potential of AI in construction.

