
Performance Anxiety

Good intentions are one thing, but how do we really know how well our buildings measure up?

Science is what you know. Philosophy is what you don’t know. —Bertrand Russell (1872–1970) English philosopher, mathematician

The days of architects justifying design decisions with interpretations of esoteric philosophy are numbered. Wright rearranged clients’ furniture, Le Corbusier’s roofs leaked, and Mies van der Rohe’s Farnsworth House is the epitome of dysfunctional Modernism. Contemporary clients, however, are less accommodating (and more litigious) and rarely consider hubris a desirable quality in the person paid to design the roofs over their heads.

Architecture is a unique form of commercial production in that every building is a prototype — our crash-test dummies are, generally, previous clients. Each building has a unique combination of form, use, construction, systems, site, and project team, each with an impact on performance, energy, environment, cost, and quality. Assurances to clients ride on a plethora of assumptions. The only way to establish whether the assumptions are valid is to revisit buildings after occupation, and systematically and objectively monitor, measure, and evaluate their performance. Similarly, the only way to substantively move the practice of architecture forward is to establish practice methodologies based on solid, scientific evidence rather than intuition and anecdote.

The idea is not new. First developed in the 1970s, post-occupancy evaluations (POEs) took a “real world” scientific approach to assessing the performance of buildings and, by extension, the built environment. Incorporating a host of comparative methods, these were typically conducted about two years after occupancy of new buildings and addressed how well the buildings met user needs, their environmental performance and, in some instances, their operating and projected lifecycle cost.

Traditionally, POEs were associated with recently built projects, especially those with ambitious energy and environmental targets or unique technologies. Thus, they generally did not address the energy impact of the existing building stock and had little influence on retrofit and renovation efforts. But perhaps the greatest failing of the approach was its lag time: project teams received feedback years after the initial design work. Designers often felt that their thinking and methodologies had evolved enough in the intervening years that the feedback was no longer relevant, making it difficult to change design practices. To ensure that critical lessons took root, feedback needed to be integrated more effectively into the project process, and evaluators needed to assess and report on buildings that designers felt still represented the pinnacle of their technical prowess.

Building Performance Evaluation (BPE) has evolved out of decades of efforts to address these issues. BPE refers to a broader application of POE and scientific assessment techniques; unlike POE, it extends into the construction phase and can be easily applied to existing structures and renovations. In construction phases, these techniques are used as an advanced means of quality control. Triple air-pressure tests, for example, verify airtightness at key stages of completion to ensure that detailing and construction methods are meeting the intended technical standards. Periodic quality checks also ensure that construction crews develop their own skills and processes to more effectively monitor their own work. They also maintain a dialogue about quality between the design and construction teams so that specifications and details can be improved with input from builders. This is a clear advantage to teams who consistently work together.

The process has uncovered problems with “rules of thumb” and regulations, sometimes sending designers unexpectedly back to revisit the fundamental principles of good design. A recent co-heating and thermography survey of masonry townhouses built to 2006 regulation standards in the UK showed massive heat losses through the roofs above party walls. Previous regulations assumed that heat loss from dwellings through party walls was zero. However, the study consistently showed that poor detailing and construction resulted in thermal bridges, a lack of cavity closures, and air gaps, which drove convection currents in the cavities. These were acting as thermosyphons, drawing heat from the adjacent units into the cavities and then to the outdoors through the cavity roof and walls, accounting for up to 30 percent of the building’s total heat loss. Findings like these have the potential to change industrywide practices, influencing both regulations and strategic investment.

BPE can also serve to test theoretical assumptions and calculations. For example, field tests that measure the heat flux and thermal conductivity through walls have shown variations ranging from roughly 5 percent to 20 percent of theoretical values, with certain constructions and fabrication techniques consistently more reliable than others. This empirical knowledge of inherent variations is applied in Scandinavia, where designers adjust the theoretical thermal conductivity values of construction assemblies twice in design calculations to give more realistic predictions of completed performance. They factor in one variable to account for uncertainties in the properties and dimensions of building materials and the resulting inconsistencies in craftsmanship, and another to adjust for the effect the assembly complexity has on its performance. This prevents them from assuming that an overly complex construction that is difficult to implement on site is more thermally effective than it’s likely to be.
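The double adjustment described above can be sketched as a simple calculation. This is a minimal illustration only: the two correction factors below are hypothetical placeholder values chosen for the example, not figures from any Scandinavian standard, and the mineral-wool conductivity is a typical declared value used for demonstration.

```python
# Illustrative sketch of adjusting a theoretical thermal conductivity
# (lambda, in W/(m·K)) with two correction factors, as described in the
# text: one for material/workmanship uncertainty, one for assembly
# complexity. The factor values here are hypothetical, not code values.

def adjusted_conductivity(lambda_theoretical: float,
                          material_factor: float = 1.10,
                          complexity_factor: float = 1.05) -> float:
    """Return a more realistic 'design' conductivity.

    Both factors inflate the theoretical value, so a complex assembly
    that is hard to build well is credited with less insulating
    performance than its idealized calculation would suggest.
    """
    return lambda_theoretical * material_factor * complexity_factor

# Example: mineral wool with a declared lambda of 0.035 W/(m·K)
design_lambda = adjusted_conductivity(0.035)
print(f"Design conductivity: {design_lambda:.4f} W/(m·K)")
```

Because the adjusted conductivity feeds directly into U-value calculations, even modest factors like these shift predicted heat loss noticeably, which is the point: the prediction moves toward what field measurements actually find.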

Although construction-phase monitoring can improve quality, and scientific assessments can evaluate technical assumptions, they are still not enough to ensure that performance expectations are met. In 2009, the Usable Buildings Trust and the Building Services Research and Information Association (BSRIA), both based in the UK, launched the “Soft Landings” framework to respond to the need for immediate feedback and increased user support as well as to provide the opportunity for more extensive assessments. Studies had found that buildings weren’t used as designers envisaged, often because of misunderstood design intentions, poorly executed design features, and inadequate user training, sometimes with drastic effects on energy use and performance. Soft Landings is intended to increase the intensity of designer engagement both before and after initial occupancy. A residency period during the first weeks of occupation gives the design team a structured time period in which to carry out quality assessments that must be done while the building is operational, to support and advise the client and users, and to learn from working in their own building. The process is akin to “sea trials” in naval architecture, where a boat’s design and robustness are tested in real-life scenarios as part of the commissioning process. In practical terms, Soft Landings aids in risk management by using BPE methods to anticipate problems.

But perhaps the greatest value of Soft Landings is its potential to boost the quality and rigor of the research that is key to ensuring relevant lessons are extracted and that the root causes of problems are addressed appropriately in future projects. The whys are always more important than the whats. For example, data collected on a new primary school as part of a two-year joint BPE research project between Architype Ltd. and Oxford Brookes University showed a spike in gas use over the summer break. The detailed nature of the data-collection methods allowed researchers to identify exactly the weeks in which the boilers were burning, which led them to the cause: When the boilers were serviced just before the summer break, the mechanic overrode the automatic controls to check his work but never re-engaged them when he left, leaving the boilers running all summer. The findings led to specific recommendations to the client for improved management and modifications to the designer’s own client-handoff process (a more formal, extended process in the UK than it is in the US), to reduce the likelihood of similar problems on future projects.

The temptation is to sanitize findings such as this and to give a figure for the building’s potential performance without operational slip-ups — a temptation that should be resisted. The X-factor effect of the occupants’ presence is as important as the quality of the building’s design and construction. Designers must accept that their buildings are rarely used as they anticipate, however frustrating that may be. Comprehensive knowledge of how buildings are actually used will dissipate the haze of unrealistic expectations and lead to more robust assumptions in design phases, better expectation management, more realistic predictions of performance, and reasonable expectations of occupants. The all-too-human tendency to overpromise and underdeliver is not one the profession will survive in a competitive environment. But firms that see opportunity in these techniques can develop more comprehensive services for clients who understand the difference between assuring and ensuring performance.

Although BPE is a science, it’s not an exact science. Sometimes spurious data is recorded (such as when schoolchildren make a game of breathing on a CO2 sensor to make the count on the digital readout go up and down), and sometimes the answers from scientific analysis are ambiguous, with no clear resolution. Not all problems have simple solutions; scientific answers can be more baffling than the questions. However, every question has a means of investigation, and although the complexity of buildings in operation can be overwhelming, ignorance should not be the accepted default. The scientific evaluation of building performance is the only way for our industry to move forward and meet the expectations of the societies we serve.