For centuries, architects have begun the design process by sketching concepts and geometric forms. Whether the designer puts stick to dirt, pen to paper, or mouse to mousepad, the output is much the same: a visual representation of the project in question. But this centuries-old notion of how to begin the design process is changing. In firms around the world, architects are kicking off the process with scripts, algorithms, and simulations. They feed detailed project data and requirements into these programs and receive back multiple design iterations optimized to meet those requirements. This process has been dubbed “computational design” and “generative design”, terms that seem to be used almost interchangeably.
For instance, a client wanting to build a new corporate headquarters may have a list of quantifiable requirements.
- LEED certification
- three conference rooms per floor
- a cafe on each floor, each with a west-facing view
- five 100 sq ft offices per floor that do not share walls with the conference rooms
- three team workspaces that can each accommodate seven desks and maximize collaboration within the team without interfering with other teams in the area
These requirements are entered into a script that generates design alternatives, all adhering to the original set of rules. In the example above, the architect is mostly asking layout-related questions, but they could just as easily be feeding in data related to complex structural challenges (think of the “Bird’s Nest” stadium from the 2008 Beijing Olympics).
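The rule-based core of this process can be sketched in a few lines of code. The snippet below is a purely hypothetical illustration, not any real firm's tool: it enumerates candidate floor layouts over two made-up design variables (conference-room count and cafe orientation) and keeps only the candidates that satisfy two of the client requirements listed above.

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class FloorLayout:
    """A toy candidate layout described by two design variables."""
    conference_rooms: int
    cafe_orientation: str  # "N", "S", "E", or "W"

def satisfies_requirements(layout: FloorLayout) -> bool:
    """Check a candidate against two of the client's rules:
    three conference rooms per floor and a west-facing cafe."""
    return layout.conference_rooms == 3 and layout.cafe_orientation == "W"

def generate_alternatives() -> list[FloorLayout]:
    """Enumerate every combination of the design variables,
    then keep only the layouts that pass the rule check."""
    candidates = (
        FloorLayout(rooms, facing)
        for rooms, facing in itertools.product(range(1, 6), "NSEW")
    )
    return [c for c in candidates if satisfies_requirements(c)]

alternatives = generate_alternatives()
print(len(alternatives))  # of the 20 combinations, only one passes both rules
```

Real generative design tools work over far richer geometric and structural variables, of course, but the pattern is the same: enumerate (or search) a design space, then filter and rank by the project's rules.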
Depending on the complexity of the structure, dozens or even hundreds of alternatives may be generated over the course of the project. This is the type of time-consuming computational work that is ideally suited to a computer. Now it is up to the architect to use one of these alternatives to design a highly functional and beautiful corporate headquarters that reflects the client’s aesthetic, cultural and corporate values – work that is ideally suited for a human.
Zaha Hadid’s and Frank Gehry’s use of computational design has been well documented, as has that of many smaller firms that have partnered with academic institutions such as the computation group in MIT’s Department of Architecture.
However, the past several years have seen generative design become accessible to nearly anyone. Many firms are using off-the-shelf scripting environments such as Grasshopper for Rhino and Dynamo for Revit.
And in a twist that is unusual for the AEC industry, a company called Flux has created a platform that allows these and other rule-based solutions such as simple spreadsheets and in-house scripts to exchange data. This is a boon for an industry that rarely relies on a single design tool and consistently struggles with data interoperability.
As with any pending sea change in an industry, the adoption of generative design practices raises questions about the future of both the profession and the existing tools. This isn’t a “death of CAD” or, more broadly, “death of representative design” moment. The data and design parameters generated by these algorithms aren’t meant to replace a 3D model or floorplan.
Instead, they are an upstream process that will inform those models and floorplans. Nor does reliance on generative design make the architect any less pivotal to the design process. Daniel Davis provides great insight in his Architect Magazine article, “Why Architects Can’t Be Automated”.
Davis points out that “University of Oxford researchers Carl Frey and Michael Osborne have estimated that architects are one of the least likely professions to be automated in the next 20 years. They give architects a 1.8-percent chance of being automated, compared to a 93.5-percent chance for accountants, and an 89.4-percent chance for taxi drivers.”
This is down to the fact that architects spend much of their time collaborating with clients, finding mutually agreeable solutions and making qualitative decisions – all tasks that humans tend to do better than computers.
The fact that architects and representative design aren’t going away anytime soon doesn’t mean that firms won’t make adjustments to capitalize on the possibilities of computational architecture. As Daniel Davis notes, this time in his excellent article “The Next Generation of Computational Design”, firms are going beyond off-the-shelf solutions: with increasing frequency, firms such as HDR and NBBJ are hiring software engineers as computational designers who develop scripts to aid in rule-based design.
With this influx of software developers, some firms are beginning to adopt a more Silicon Valley-like approach to projects. In these firms, the fast pace of iteration in generative design is no longer viewed as simply a convenient characteristic of the conceptual/programming phase of a project – it is becoming a more broadly applied philosophy, practiced firm-wide (or at least project-wide).
Similar to the way many software engineers have embraced agile development practices that encourage continuous planning, testing, and iteration, these firms are emphasizing collaboration, speed, and short-term project goals. Davis describes how this technique has been modified for design teams at HDR: “At the start of a sprint, the entire team meets to determine the ‘backlog’ – the set of tasks to be completed during the sprint. In architecture, the backlog may be a set of features to add to a parametric model or a list of outstanding items in a construction-document set.”
While some firms’ implementation of computational architecture is decidedly cutting edge, rule-based design isn’t new. In 2009 Cadalyst magazine ran an article titled “Generative Design Is Changing the Face of Architecture”. The article points to the formation of the SmartGeometry group in 2001 and details how Arup used Bentley’s generative design technology to create the Water Cube aquatics center for the 2008 Beijing Olympics.
However, widespread adoption of relatively user-friendly tools such as Grasshopper and Dynamo is new – or at least can be considered an “emerging trend”. And the importance of this trend feels very real. Generative architecture promises far more than just providing CAD companies another piece of “must have” software to peddle. It’s an opportunity to use computing power in a more deliberate way, allowing us to create a built environment that contributes to a more livable planet. And that is a stated goal of nearly every major architecture firm in the world.