Anatomy of Computational Building Geometry
Unveiling the Foundational Methods Beneath Complex Architectural Design
In this interview, Madeleine Eggers, Computational Design Specialist at KPF, discusses her role in integrating computational design into the firm’s architectural practice. Eggers outlines the importance of computational methods in managing complex design elements and adapting to the industry’s shift toward fluid design phases and accelerated schedules. She also provides insights into her upcoming presentation at CDFAM NYC, where she will cover the foundational methods of computational building geometry and their application in optimizing large-scale architectural projects.
Can you describe your role at KPF and how your expertise in computational design contributes to the firm’s architectural practice?
I am a computational designer with a focus on exteriors and building geometry.
In 2022, I transitioned from a design team role (fully embedded on a project for multiple phases, tasks, years) to a specialist role (doing computational problem solving for 3-4 weeks per project, juggling multiple projects at once), and then built out a small team.
The computational team balances our time about 70/30 between firm-wide initiatives and project work – in the latter, we handle the whole computational scope of a project and maintain an ongoing relationship with project leadership, working in parallel with the design team.
With the way projects are moving and schedules accelerating, the traditional delivery approach of clearly delineating design phases is declining, giving way to more fluid boundaries between phases.
Computation is becoming more critical: articulating and detailing are no longer happening just in DD (Design Development) or CA (Construction Administration). Depending on project constraints, we might be trying to anticipate these things in SD (Schematic Design) or earlier, and the way the building is articulated may have an effect on larger design moves.
The amount of accuracy expected in early stages – competition, concept, even feasibility studies – is skyrocketing. To win competitions, we need to show more detail and have more things worked out than before.
Computational design at KPF integrates complex design elements early on, which is essential for delivering ambitious and large-scale projects.
This approach allows KPF to adapt to the industry’s shift toward fluid design phases and accelerated schedules, while staying true to the ethos of creating ambitious, cohesive, and buildable products.
We’re spotting a trend in the way our projects are working, and anticipating this change via an integrated computational design approach. To me, this is computation’s value proposition: huge projects like KPF’s require a lot of coordination, optimization, and documentation, so computational design allows us to speed up design decisions, validate them quickly, and proactively anticipate change.
Your presentation at CDFAM NYC will focus on the foundational methods of computational building geometry, specifically the codification of five core methods—data branching, point sorting, plane-based calculation, cross-referencing, and surface rebuilding. Why are these methods important for complex architectural design, and how do they support the optimization of large-scale projects?
These five core methods are instrumental to much of our computational logic and provide the underpinnings for most of the higher-order design and analytical computational tools.
I see this as a “meta” approach to computational modeling. We can streamline the automation part by breaking it down into core geometric moves, then making those moves robust and interchangeable, so that the automation portion boils down to 5-10 chained functions. These methods are important because they form the foundational 80% of the work itself – if we can document them and reliably turn them into reusable parts, we can speed up the geometry automation and spend more time on the 20% of higher-order tasks.
We are thinking about macro-level building blocks of computational modeling: interchangeable and flexible in many different scenarios like a Grasshopper component, but dealing with more information and performing a specific task that we see over and over in KPF-specific contexts.
Each project is unique – we are dealing with different clients, contexts, climates, etc. – but there are a lot of consistencies in how we approach them computationally. The idea is that we are not recycling whole definitions, but instead breaking them into consistent parts and going about our computational processes more systematically. Taxonomizing our core functions like this lets us more intentionally reuse specific processes from project to project.
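As a loose illustration of that framework (hypothetical Python, not KPF's actual tooling): if each core method is a small function with a consistent data-in, data-out signature, a project definition becomes a short chain of interchangeable parts.

```python
# Hypothetical sketch, not KPF's code: each "core method" is a small function
# that takes data in and hands data out, so a definition is just a chain.
from functools import reduce

def chain(*steps):
    """Compose core methods left-to-right into a single 'definition'."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Stand-ins for the five core methods (real versions would do geometry).
def data_branching(data):     return data   # group items into branches
def point_sorting(data):      return data   # order points around a source
def plane_calculation(data):  return data   # derive working planes per branch
def cross_referencing(data):  return data   # pair branches that belong together
def surface_rebuilding(data): return data   # rebuild clean surfaces from edges

# A project definition: roughly 5-10 chained, reusable moves,
# plus whatever project-specific steps sit around them.
panel_setout = chain(
    data_branching,
    point_sorting,
    plane_calculation,
    cross_referencing,
    surface_rebuilding,
)

print(panel_setout(["raw geometry goes here"]))
```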
Could you elaborate on how these methods are applied in the early stages of a project and how they evolve as the project grows in complexity?
For KPF, the main value add of computational design is the ability to iterate through design options in an articulated way when large design moves are still being settled. The easiest time to change the design and catch issues is early in the process, so the earlier on we model in detail and optimize over the scale of the whole project, the better.
Setting up a project computationally early on, once the general formal logics are in place, usually starts off as typical geometry automation: modulation, setouts, simple rationalization, facade expression. Then, in SD or DD, we take that computational foundation and use it to tackle higher-level problems that tend to be more bespoke from project to project – project-specific issues that have emerged over the course of the design, such as right-sizing curved facade units for buildability within budget, overall form-finding to maximize sale value and minimize construction cost, balancing principal views with energy code, and so on.
It’s important to note that a KPF project can be very complex from the start – our projects tend to grow in precision, or grow in constraints – and we can manage that growth through computational logic.
The beauty of these ‘atomic’ methods is that they don’t really change when the project grows in complexity – unifying planes or sorting points around a source will be the same whether you’re trying to build a simple feature wall, document the workpoints of 6,000 doubly-curved panels, or analyze ocean views from each apartment unit.
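For example, "sorting points around a source" can be written once and reused unchanged at any scale. A minimal sketch, assuming plain (x, y, z) tuples rather than Rhino geometry:

```python
import math

def sort_around(points, source):
    """Order points counterclockwise (in plan) around a source point."""
    sx, sy = source[0], source[1]
    return sorted(points, key=lambda p: math.atan2(p[1] - sy, p[0] - sx))

# The same call works for a handful of feature-wall points...
wall = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
print(sort_around(wall, (0, 0, 0)))

# ...or for thousands of panel workpoints around a tower core.
panels = [(math.cos(i), math.sin(i), i * 0.01) for i in range(6000)]
ordered = sort_around(panels, (0, 0, 0))
print(len(ordered))
```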
These methods are foundational processes that we keep coming back to over and over again, and end up revisiting in vastly different contexts. As our projects grow in complexity, we have to control the information we create more specifically, and this tends to go hand in hand with the data structure getting more complex – we might be dealing with data trees five or six levels deep, each layer of which has a physical meaning.
The more complex the information we have to process, the simpler we want our processes to be. Having simple foundational functions makes our computational models more robust as the information gets more complex.
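A hypothetical sketch of such a data tree, with nested Python dicts standing in for Grasshopper tree paths; the layer names are invented, but the point is that each level corresponds to something physical on the building:

```python
# Hypothetical layering, invented for illustration: each level of the
# structure has a physical meaning, so a "path" reads like an address
# on the building: tower -> facade face -> floor -> panel -> corners.
tree = {
    "tower_A": {
        "north_face": {
            "floor_03": {
                "panel_012": {
                    "corners": [(0.0, 0.0, 9.0), (1.5, 0.0, 9.0),
                                (1.5, 0.0, 12.0), (0.0, 0.0, 12.0)],
                },
            },
        },
    },
}

# Addressing one branch is then a readable path rather than an opaque index.
corners = tree["tower_A"]["north_face"]["floor_03"]["panel_012"]["corners"]
print(corners)
```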
What are some of the challenges in adopting, standardizing, and scaling these core methods to ensure their robustness across diverse architectural projects, and how do you effectively communicate their value to educate clients, colleagues, and stakeholders?
Currently our priority is less about trying to build these methods as a scalable tool or market them as a product – if that happens, great, but this is more a foundational framework for how we work internally and build out a computational design practice within a larger office.
“Automating the automation” by boiling the geometry-creation part down into a handful of chained reusable functions sounds simple, but it’s really hard to do – sometimes the core function happens in the middle of another function, and sometimes the inputs and outputs are different.
Data is grouped in many different ways depending on the project geometry and the task that needs to be done.
Are the panels grouped by face, then by floor for modulation? Or by floor, then by face for analysis? Do we have several fluid massings to modulate at once, or one massing with several tiers? Another big challenge is edge conditions – it’s simple to account for typical conditions algorithmically, but much harder to write a rule set that accounts for non-standard conditions.
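A hypothetical sketch of the grouping side of this: the same flat panel list can be branched face-then-floor for modulation or floor-then-face for analysis, while the branching function itself stays the same.

```python
# Hypothetical example: regrouping one flat panel list two different ways.
def branch(items, *keys):
    """Nest a flat list into dicts by successive key functions."""
    if not keys:
        return items
    first, rest = keys[0], keys[1:]
    groups = {}
    for item in items:
        groups.setdefault(first(item), []).append(item)
    return {k: branch(v, *rest) for k, v in groups.items()}

panels = [
    {"face": "north", "floor": 1, "id": "P-001"},
    {"face": "north", "floor": 2, "id": "P-002"},
    {"face": "south", "floor": 1, "id": "P-003"},
]

by_face_then_floor = branch(panels, lambda p: p["face"], lambda p: p["floor"])  # modulation
by_floor_then_face = branch(panels, lambda p: p["floor"], lambda p: p["face"])  # analysis

print(by_face_then_floor["north"][2])
print(by_floor_then_face[1]["south"])
```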
These grouping and edge-condition questions point to the importance of segmenting the functions correctly, so that we can either chain them together all the way through or make some adjustments in GH in between. When creating definitions to automate geometry, we think of each definition as individual functions that come together to create the whole. Some of these are foundational functions, and some are more project-specific.
Importantly, some of the code isn’t packaged so neatly into a single-input, single-output function. As we assess our existing definitions to break them up schematically into core functions, part of the task is almost like refactoring the code – seeing whether the way we initially wrote the functions is robust, or whether we need to organize them differently in order to scale up.
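A hypothetical before-and-after of that kind of refactor: a reference plane (here reduced to a simple centroid) computed in the middle of a sort is pulled out into its own single-purpose function, so the core move becomes reusable on its own.

```python
import math

# Before (hypothetical): the reference-point calculation is buried inside the
# sort, so it can't be reused or swapped for another plane strategy.
def sort_panel_points_old(points):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

# After: two single-purpose functions with clear inputs and outputs.
def centroid(points):
    """Core move: a reference origin (stand-in for a full plane calculation)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def sort_around(points, origin):
    """Core move: order points counterclockwise around an origin."""
    ox, oy = origin
    return sorted(points, key=lambda p: math.atan2(p[1] - oy, p[0] - ox))

pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
assert sort_panel_points_old(pts) == sort_around(pts, centroid(pts))
```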
Can you provide examples from specific projects where these methods played a significant role from managing geometric complexity through to manufacturing, fabrication, and project completion?
For example, one doubly-curved panelization project we worked on had three major computational scopes at different phases: automating and rationalizing panels in SD, documenting panels in DD, and documenting frit in CDs (Construction Documents).
These seem like different tasks, but they ultimately boiled down to the same core methods in the first 70% of the definition, just in a different order. Automating panel creation for design iterations relied heavily on getting the points sorted to panelize correctly and on branching the data to process each row differently.
The panel and frit documentation processes in CDs were similarly built off the same foundational elements: the frit could only be mapped correctly onto a rebuilt surface, the surface could only be rebuilt after separating the edge curves, the edge curves relied on the correct plane to separate correctly, and the plane itself depended on sorting and calculating the points and edge curves. Nailing down these “boring” functions took 80% of the time, mostly to account for every edge condition. The actual task – mapping the frit, then documenting it – was a relatively quick add-on to the definition once the setup was established.
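Expressed as code, that dependency chain might look like the following hypothetical sketch, a concrete instance of the chaining idea above, with stub functions standing in for the real geometry operations:

```python
# Hypothetical stubs tracing the dependency chain described above:
# sorted points -> plane -> separated edge curves -> rebuilt surface -> frit map.
def sort_points(raw_points):        return sorted(raw_points)           # point sorting
def plane_from(sorted_points):      return ("plane", sorted_points[0])  # plane-based calculation
def separate_edges(points, plane):  return ("edges", len(points))       # edge curve separation
def rebuild(edges):                 return ("surface", edges)           # surface rebuilding
def map_frit(surface):              return ("frit", surface)            # the quick add-on at the end

def document_panel(raw_points):
    pts = sort_points(raw_points)
    pln = plane_from(pts)
    edges = separate_edges(pts, pln)
    srf = rebuild(edges)
    return map_frit(srf)

print(document_panel([(1.5, 0.0, 9.0), (0.0, 0.0, 9.0),
                      (1.5, 0.0, 12.0), (0.0, 0.0, 12.0)]))
```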
What do you hope the audience will take away from your presentation in terms of applying these methods to their own architectural projects, and what are you looking forward to gaining from your participation at CDFAM?
I hope the audience walks away with a methodological foundation for architectural computational design work. If your practice is developing a computational design department—whether in AEC, product design, or another application—distinguishing between your core functions and your project-specific work is a good first step.
Our core functions are accountable to our own processes and have been distilled through iteration; understanding that core framework has allowed it to become the backbone of almost everything we do.
This is what we found in architectural practice – other industries may have different core functions, but the idea of laying a foundation of core reusable methods to build complexity around stays the same.
I also hope to convey the value of evaluating your computational work contextually. By decoupling what is project-specific from what is repeatable, computational design teams can create more adaptable, reusable processes that make teams more efficient in the short term, and ultimately better able to handle the changing nature of how architectural projects are designed and delivered in the long term.