The Digital in Architecture: Then, Now and in the Future

12.11.19 · 53 min read

Authored by Mollie Claypool

Today within architecture, digital tools — from machine learning to fabrication technologies, from artificial intelligence to Big Data — are rapidly becoming ubiquitous. Increased interest in the impact these technologies are having, and will have, on our daily lives has quickly expanded their use in architecture schools, small independent firms and international corporate practices. From augmented reality for construction to 3D-printed architectural models to artificial intelligence within the design process, it is increasingly rare for an architectural project not to use some kind of digital tool for design or fabrication. The same is true of how we experience the built environment. The digital is everywhere, from the infrastructure we use to navigate the world to the objects we use to communicate.

This fundamental shift is not lost on the architecture industry. ‘In the future, digital tools will come… closer to our human bodies, enabling us to more conveniently access and utilise digital information in our daily lives,’ says architect and designer Soomeen Hahm. ‘Interactivity and connectivity to virtual data and digital information will be stronger than ever before.’

In this context, the increasing proliferation and promise of digital technologies are huge opportunities to shift our shared understandings of the world from an architectural perspective. How can the digital aid in the creation of new spatial models that are more equitable or inclusive? How have digital design and digital fabrication innovated not only designing and making, but also how we experience the built environment? Are digital tools mere methods that can solve technical problems, or can we extrapolate their potential to change the way we design, build and inhabit our world for a more sustainable future? These are just a few of the questions guiding the creation of this report.

To tackle a body of work of this scale, SPACE10 teamed up with architecture theorist Mollie Claypool and design firm Pentagram. Claypool wrote the entire piece, whereas Pentagram designed the physical version — as well as the data visualisation included within the pages. The latter was led by information designer Giorgia Lupi, and essentially consolidates the history you’ll read below into an ebbing and flowing illustration which guides you through key buildings, architectural movements, digital developments, and more. You can pick up the print version of the report and see the data visualisation at SPACE10 Copenhagen.

In the meantime, though, dig into the report below — and join us to consider how looking towards the past may help us anticipate the future.

Methodology

This report aims to describe the ways in which innovations in digital tools for design and fabrication in architecture have contributed to the way that people experience the built environment today. It does this by looking at some of the key developments in digital thinking within this industry — ranging from the late 19th century until the present day, with continuous emphasis on parametric design. Broadly, parametric design can be defined as work that is driven by parameters — where certain sets of rules inform the architectural or design output.
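
To make the definition concrete, here is a minimal sketch of a parametric rule (in Python, with invented geometry and numbers): a few input parameters, one set of rules, and a whole family of related outputs.

```python
import math

def arched_window(width, height, divisions):
    """A toy parametric rule: three parameters fully determine the output
    geometry, so changing any parameter regenerates the design."""
    mullions = [(width * i / divisions, 0.0) for i in range(divisions + 1)]
    arch = [(width * i / divisions,
             height + 0.2 * height * math.sin(math.pi * i / divisions))
            for i in range(divisions + 1)]
    return {"mullions": mullions, "arch": arch}

# The same rules with different parameters yield a family of variants.
variants = [arched_window(w, 2.4, 5) for w in (1.2, 1.8, 2.4)]
```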

It may be surprising that the digital can be traced back so far in history; in fact, some architecture historians have argued that it began in the Renaissance.1 These developments have been intrinsic to the ways that designers engage with digital tools — including software and manufacturing technologies — to design and produce architecture today.

The report uses the voice of an architect trained in the US — now a theorist-historian based in the UK — to first look backwards in order to look forward into the future. It is widely recognised that in the late 20th century, the discipline of architecture foregrounded the use of digital tools and techniques ahead of every other design discipline.2 Initially adopted from the aeronautical, manufacturing, automobile, shipbuilding and animation industries, these tools and techniques rapidly proliferated within architectural design from the late 1980s onwards, forever transforming the way that architects design and realise projects.3 From augmented reality technologies to automated construction processes, the report ignites an exciting conversation about the future role of architectural design and fabrication through digital technologies.

Throughout the more historical part of the report, the architects and thinkers included are figures who today are continually cited by those working within the various areas of digital design and fabrication in architectural research. They are essential inclusions, as they are some of the most valuable references for understanding the state of the digital in design today. This is by no means an exhaustive list of people, projects or innovations.

It is important to note that the history of architecture and design, and therefore the canon from which it draws, is ever-evolving. Digital tools have given architects and designers great opportunities to communicate their work to large, international audiences. Sharing new techniques and utilising innovations has enabled the proliferation of design techniques and processes to much wider groups of people; previously, this knowledge would have remained available only to the academies or practices at the helm of these developments. As a result, a diverse range of voices is emerging from around the world about how this history should be written. As the report moves from the 19th century towards today, it will aim to reflect this shifting landscape.

Where possible, the report tries to mitigate the underlying biases of the discipline. It’s no secret that historically, architecture has been populated by men. There are a few explanations for this, most of which are rooted in how patriarchal capitalism favours some and excludes others. For example, becoming a registered architect requires long periods of study, and the profession demands long hours — which has long been, and remains, a significant barrier for some women with children. In addition, the cost of studying and qualifying prevents many from underprivileged demographics from accessing architectural education. And the relatively low pay, given the workload and accrued debt, deters many from continuing in the career post-education.

This isn’t to say that historically, there were no female or minority architects. But, much like in art history, the architecture most documented and praised for its influence reflects the patriarchal context it comes from. As such, it is male work that is the most canonised and the easiest to track down on a historical basis. However, now is the time to develop discourse that actively rebuts this patriarchal tradition — which is what we have tried to do throughout this report. As you’re reading, you’ll notice a diversity of voices and contexts lending perspectives to how the industry has evolved with the digital, particularly in the latter half of the report. In addition, we can recommend further reading around these topics; a good place to start is The Architecture Lobby.4

1. Origins: Morphological Thinking

Our concerns about the future of architecture in an age of digitisation have direct links to how we understand our relationship to nature. To root that understanding, it makes sense to look backwards to one of the major shifts in post-Enlightenment thinking: from vitalism to empiricism in 19th-century science. This shift was signified by scientific and technological progress that led to greater understanding of the behaviours and mechanisms underlying human, animal and plant life. Those who believed in vitalism thought that what separates living organisms from anything ‘non-living’ is the presence of a ‘non-physical element’, like a spirit or a soul — which was also considered to be the most important aspect of that living being.5 Empiricists, on the other hand, considered all entities to be governed by similar principles and sought evidence to prove this was the case. In particular, the work of both biologist-mathematician D’Arcy Thompson and naturalist-biologist Charles Darwin enabled a leap in understanding how the environment and genetics impact the morphology of the objects and phenomena in our natural world. Darwin’s theory of evolution — detailed in his book On the Origin of Species (1859) — explained that evolution occurs through natural selection acting on variations in phenotypes.6 In short, phenotypes are all the observable traits of an organism — from its shape to its behaviour. (If we’re describing a bird, for example, the way it looks, flies, chirps, builds its nest or collects food are all phenotypic traits.7) Following that logic, phenotype variation — whether caused by environmental factors or genetically inherited — is what natural selection works on to ensure a species’ survival. As for Thompson, his work On Growth and Form (1917) emphasised that physical and mechanical factors — an approach known as structuralism — are crucial to understanding the behaviour and form of all species.8 Specifically, On Growth and Form pioneered the use of mathematical models for understanding how environmental conditions cause species to transform or adapt. These models helped Thompson argue that a species’ form had a direct relationship to the forces acting on it externally.

Together, the work of Darwin alongside Thompson’s more structuralist thinking inspired architects to harness aspects of nature and its behaviour in their designs. In America, this translated into the architect Louis Sullivan’s notion of functionalism — the idea that the form of a building must emerge from its functions.9 Other architects such as Frank Lloyd Wright (often abbreviated to FLW) — who had worked for Sullivan early in his career — further articulated the importance of integrating natural behaviour into architectural design through the notion of ‘organic architecture’.10 This approach was most evident in FLW’s design for the Prairie Houses — homes developed on land that was once prairie on the outskirts of Chicago. Here and in other projects of the era, architecture could be understood as an organism in harmony with its environment, from its morphology to its function.11 This was fluidly embedded in many designs of this period — from how motifs and patterns from nature were used as decoration, to how all of a building’s pieces were designed to be in relationship with one another through their unifying geometry. In short, it’s about embedding organic logic and synergy into form so that it can be experienced at every scale of design, from the details on a seat to the organisation of a home or neighbourhood.

This mentality can be summarised as ‘morphological thinking’, and it allowed architects to consider how nature’s principles could transcend all forms of architectural design.12 And according to architect, researcher and head of the Institute for Computational Design and Construction at Stuttgart University Achim Menges, morphological thinking is just as relevant today. ‘In living nature, the generation of form and its materialisation are inherently and inseparably related,’ he says. ‘Accordingly, morphogenetic design brings design and fabrication much closer together. This is a general prerequisite for tapping the full potential of digital technologies in architectural design and construction.’

2. The Proto-Parametricists

Insight into the principles of nature, and the mathematics behind these principles, hugely influenced architects in the early to mid-20th century. While they certainly did not have access to the design technologies of today, they were able to apply morphogenetic thinking in an analogue way with whatever means they had at the time. Specifically, this led to a series of works that could be described as ‘proto-parametricist’ — using analogue means to compute form with parameters. During this period, Sullivan’s idiom ‘form follows function’ began to take on new meaning.

The Italian architect Luigi Moretti argued that if function — a building’s purpose — could be described through a set of parameters, then architects could design form using mathematical equations that relate to performative criteria.13 By performative criteria, we mean structural forces, spatial or geometric relationships, and environmental or experiential qualities such as light and air flow. Moretti proposed that the sets of relationships that emerged created the notion of ‘architettura parametrica’, in which parameters are assigned to the performance of architectural components — much like how the 0s and 1s of computer code represent certain actions.14 (Not coincidentally, he developed this notion in the period when electronic computers were first being built.15) And while this was arguably the first time the word ‘parametric’ had been used to describe how one understands relationships in the processes and forms of architecture during a period of technological innovation, it was not the first time that architects had thought in an algorithmic way.

Famously, the Catalan architect Antoni Gaudi worked computationally, but in an analogue way, in his models for the Sagrada Família (1882-1926) in Barcelona.16 He rarely used drawings as a method of design, preferring to work rigorously with physical and material behaviour. Indeed, he developed the catenary arch structure of Sagrada Família using weighted interlinking strings, which were then ‘turned’ upside down using photography and drawn up into architectural drawings.
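
Gaudi’s hanging chains have a precise mathematical counterpart: a chain suspended from two points settles into a catenary, y = a·cosh(x/a), and flipping that curve upside down (as Gaudi did photographically) gives an arch standing in pure compression. A minimal sketch of that logic in Python, where the span, rise and sample count are arbitrary illustrative values:

```python
import math

def catenary_arch(span: float, rise: float, samples: int = 50):
    """Closed-form catenary y = a*cosh(x/a); inverted, it gives the
    compression-only arch Gaudi derived from his hanging models."""
    half = span / 2.0
    lo, hi = 1e-6, 1e6
    for _ in range(200):                       # bisection on the sag equation
        a = (lo + hi) / 2.0
        sag = a * (math.cosh(half / a) - 1.0)  # depth of the hanging chain
        lo, hi = (a, hi) if sag > rise else (lo, a)
    # Invert the hanging curve: the chain's lowest point becomes the crown.
    return [(x, rise - a * (math.cosh(x / a) - 1.0))
            for x in (i * span / (samples - 1) - half for i in range(samples))]

profile = catenary_arch(span=10.0, rise=4.0)   # (x, y) points along the arch
```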

The physical model was a tool for him to compute parts of the building over many years, creating a deep understanding of the structural and spatial relationships at play. When experienced by visitors to the Sagrada Família, these structural and spatial relationships play out in the vastness and upwards spatial movement embedded in the architecture of the church’s central nave. The geometry of its many structural columns changes and adapts to the different structural loads as they grow higher up in the space, with the church’s many curved surfaces intersecting.

Sadly, many of Gaudi’s drawings and models for the Sagrada Família were destroyed during the Spanish Civil War in 1936.17 A young architect, Mark Burry, was able to piece together the complex mathematical code underlying Gaudi’s models when he intensively studied the remaining fragments during the late 1970s and 1980s.18 Burry’s work brought Gaudi into the international public domain, and Gaudi’s methodology influenced many architectural designers around the world in the late 20th and early 21st centuries. This became especially apparent as computational tools further developed to incorporate physics engines — software that can simulate physical systems — to model real-world structural behaviour, like the effects of gravity, load and weight on an object.

Others such as German architect and engineer Frei Otto further developed this method of analogue computation using models. A leading figure in the computation of structures from nature including soap bubbles and spider webs, Otto used detailed physical models to analyse, understand, document and compute how these structures were formed and performed. For the Munich Olympic Stadium (1972), he built a complex physical model: wire, string and precise imaging cameras were pointed at the model to compute the behaviour of the tensile roof structure of the stadium.19 (This project is not just proto-parametric, but also relates to ideas from the cyberneticists, explored later in the report.)

Other proto-parametricists of this period include the American architect, systems theorist and futurist Buckminster ‘Bucky’ Fuller. Fuller’s keen interest in understanding how the universe worked — from its atoms to natural phenomena — led him to develop (mainly) prefabricated architectural projects which drew from scientific and technological engineering and innovation. From geodesic domes to inventions in modular deployable housing, Fuller advocated that through technological innovation, humans could do more with less and use resources more efficiently. In Fuller’s mind, a world that used fewer resources would create a more equal economy by decreasing the overall cost of products and making them accessible to more people. This, in turn, would lead to a more sustainable and democratic future.20 His work could be seen as an architectural precedent for much of the ethos behind digital design and digital fabrication today.

3. A Cybernetic Revolution

Innovations in science go hand in hand with innovations in technology. In the middle of the 20th century, rapid technological advancement spurred by the two world wars became a mechanism for developing a greater understanding of how humans and machines are controlled by, and can communicate with, one another. At the time, the systems this resulted in were broadly collated in an emerging field of research called ‘cybernetics’ — a term first defined by mathematician and philosopher Norbert Wiener in 1948.21 Cross-disciplinary by nature, cybernetics gathers together concepts from many fields including engineering, computer science, neuroscience, biology and network theory. Hugely influential in architecture and design from the latter half of the 20th century until today, cybernetics sets out a theory that all behaviour, including that of humans and machines, is part of a system of feedback loops of inputs and outputs. In any given system, these inputs and outputs continuously merge together to extend the capacity of the human or machine.

Some of the concepts of cybernetics dealt with communication and machine cognition. This thinking originates in the work of Ada Lovelace, an English mathematician and writer regarded as one of the first computer scientists for her work with mathematician Charles Babbage.22 (Babbage’s Analytical Engine, for which Lovelace wrote code, is generally considered to be the first computer.23) The work started by Lovelace and Babbage set off a wave of exploration that has informed architectural design, but not before it was further developed alongside additional advancements in computer science in the mid-20th century. Alan Turing, the famous English mathematician and computer scientist, developed the ‘Turing Test’, used to determine whether a computer is capable of intelligence that parallels human intelligence — forming the basis of our understanding today of deep learning and neural networks.24 Turing’s other research looked into how neurons work so that he could apply the logic of information processing to hypothetical machines.25 John von Neumann is, along with Turing, also considered one of the first cyberneticists. In the 1940s, his work on cellular automata — discrete, abstract computational systems that evolve through simple steps — explored concepts of self-replicating entities that can perceive and react to their immediate surroundings based on simple sets of rules.26
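
Cellular automata are simple enough to sketch in a few lines of code. The toy below is not von Neumann’s 29-state self-replicating construction but a one-dimensional cousin (what Stephen Wolfram later called an ‘elementary’ automaton): each cell reacts only to its immediate neighbours, yet complex global patterns emerge.

```python
def step(cells, rule=30):
    """One generation of a 1D elementary cellular automaton: each cell's
    next state is looked up from its three-cell neighbourhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                 # a single live cell in the middle
for _ in range(15):
    print("".join(" #"[c] for c in cells))
    cells = step(cells)       # complex global patterns from local rules
```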

These innovations — the neural network and the logics behind self-replication — are at the core of cybernetic architecture and adaptive architectural systems which use information processing, machine learning and artificial intelligence. After all, cybernetics inspired architects and designers to take these ideas and use them to understand the relationship between humans and machines. They often realised these ideas by designing utopian spaces that were informed by continuous feedback from both technology and people during the 1960s and 1970s. In particular, these spaces served as architectural investigations that explored how architecture could reflect society.

Among architects working under this mentality, the most well-known is Cedric Price, one of the most visionary British architects of the 20th century. (His work has inspired a later generation of internationally recognised architects including Archigram, Richard Rogers, Rem Koolhaas and many others.) Price’s unbuilt project with cyberneticist Gordon Pask and theatre director Joan Littlewood, titled Fun Palace, was designed in the 1960s for the East End of London, with the aspiration of becoming a ‘laboratory of fun’ and ‘university of streets’.27 The design of the Fun Palace incorporated a flexible framework and programmable spaces that could change and adapt to different needs and activities. Price’s sketches include hanging rooms and moving floors, walls, ceilings and walkways, as well as a temperature-sensitive control system to create different climates and disperse fog and warm air.

Although Fun Palace was never built, it inspired Richard Rogers and Renzo Piano’s Centre Pompidou in Paris.28 The design of the Centre Pompidou is an evolving spatial diagram which provides a vast open space that can maximise flexibility and cater to different activities. In this space, people are free to wander, gaze at artworks and installations and discover the collection in the building — all without being directed to a specific pathway by the architecture itself. The ways in which a person can move through the Centre are dictated by their own wants, desires, or needs.

In the book The Architecture Machine (1970), Nicholas Negroponte and his research group at the Massachusetts Institute of Technology (MIT) envisioned the future dynamic between humans and machines as a dialogue in which the machine can initially learn from the human.29 His work focused on the evolutionary process of design, in which machine learning would enable computers to continuously learn to produce better designs. Exemplifying this idea is the SEEK project by Negroponte and the Architecture Machine Group at MIT.30 Displayed at the Software exhibition in New York in 1970, the project consisted of blocks as well as gerbils. A simple robotic arm would continuously change the blocks’ positions and organise them into emergent patterns that responded to the gerbils’ behaviour. Negroponte argued that through SEEK, architects could understand that the robotic machine could ‘be responsible to changing, unpredictable, context-dependent human needs’, as well as require ‘an artificial intelligence that can cope with complex contingencies in a sophisticated manner’.31 This way, the robotic machine could adapt, respond and, as we will see below, eventually evolve in its performance. This research set the basis for much of the work today that looks at how robotic machines can be designed to be intelligent and adapt to different conditions or needs.

In a similar line of thought, the work of Julia and John Frazer — prominent figures at the Architectural Association (AA) School of Architecture in the 1980s and 1990s — used generative and evolutionary algorithms as a new model for the design process.32 ‘Cybernetics will enable a new form of designed artefact interacting and evolving in harmony with natural forces, including those of society,’ John Frazer reflects. ‘All designed artefacts involve interaction with the user and the environment and can thus be understood as cybernetic systems.’ As we will see later on, evolutionary computing is today widely used in architectural design as part of a process of optimisation, or finding the best possible outcomes by combining various performance criteria. In terms of the architectural process, this enables more flexibility: instead of a single one-off option that must be used in a design, multiple design options can be adjusted according to the needs of a project.
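
The evolutionary loop behind such work is compact enough to sketch. Below is a toy genetic algorithm in Python; the genome stands in for a set of design parameters, and the fitness function, an invented stand-in for combined performance criteria, is entirely hypothetical.

```python
import random

def evolve(fitness, genome_len=8, pop=40, generations=120, mut=0.1):
    """A toy genetic algorithm: candidate designs evolve toward whatever
    the fitness function rewards, as in evolutionary design optimisation."""
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 4]          # keep the fittest quarter
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)    # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:             # occasional mutation
                child[random.randrange(genome_len)] = random.random()
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Hypothetical criteria: maximise daylight terms, minimise weight terms.
best = evolve(lambda g: sum(g[:4]) - sum(g[4:]))
```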

4. Early Digital Explorations

The economic crises and recessions of the mid-1970s and 1980s drove architects to recalibrate the way they practiced. Many architects, particularly ones embedded in the relative safety of academia, began to investigate other forms of more experimental practice and look to other industries for inspiration.

The shipbuilding, aeronautical and automobile industries had been using computer-aided design (CAD) software for several decades to design complex forms. The utilisation of these tools by architecture firms such as Greg Lynn FORM, Foreign Office Architects (FOA) and NOX transformed architectural design practice: for the first time, architects were able to achieve 3D, complex, variable curves using a type of curve called a spline instead of just straight 2D lines along an X or Y axis.
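
A spline is defined by control points rather than drawn freehand: move one point and the whole curve recalculates. A minimal sketch of one spline family, a Bézier curve evaluated with de Casteljau’s algorithm of repeated linear interpolation:

```python
def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t (0..1) by de Casteljau's
    algorithm: repeated linear interpolation between control points."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]

# Moving one control point reshapes the whole curve: this is the
# 'parameter' the designer edits instead of redrawing lines.
curve = [bezier([(0, 0), (1, 3), (4, 3), (5, 0)], i / 20) for i in range(21)]
```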

As complex forms designed with digital tools became more pervasive in the architecture and design industry over the late 1980s and early 1990s, computational tools became essential not only to the design process but also to the production of drawings. These tools enabled architects to rationalise form — to make it more efficient, but also to assist with producing information for the construction process. The American architect Peter Eisenman was an important figure in the early years of digital tools in architecture. Eisenman’s work is characterised by the manipulation of blocks and grids generated through abstract steps of operations; his competition entry for the Biocenter (1987) in Frankfurt was one of the first projects to use computers to code design outputs.33 As Greg Lynn, who worked on the Biocenter project, later recalled, the computer they used worked in such a way that you could understand the processes of what it was computing — because it iterated design outcomes at the same speed as a human would.34 In a sense, the computer was just as critical to the design process as the human.35

The design for the Biocenter project was inspired by biological processes and used four interlocking geometric figures with colour coding to symbolise pairs of DNA codes and their process of replication, transcription and translation.36 Using computers enabled the architect to express what he called a morphological diagram that explored possible design solutions. The project’s focus on the generation of form placed computers into the design process. This allowed for repetitive, differentiated and adaptive form-making in a way that had not been seen before in architectural design.

In 1993, Greg Lynn edited Architectural Design’s Folding in Architecture issue — the first time that an entire issue of the magazine was dedicated to exploring architecture’s ‘embodiment of the new digital technologies that were booming at the time’.37 Greg Lynn’s Embryological House (1997-2001) is one of the most emblematic post-Folding in Architecture examples of digital architecture, in which process was a fundamental part of form generation and ultimately produced fluid forms he referred to as ‘blobs’.38 Lynn’s design of the Embryological House moves away from form that is based on the repetition of the same 2D module. Instead, it explores the notion of an infinite iteration of form, generated by shared regulating principles — parameters that are embedded in spline curves.39 ‘The task that I defined with the Embryological House was to design a number of houses that were all the same, but not identical,’ elaborates Lynn. ‘The same, meaning: they were all assembled in the same way from the same number and type of parts. But not identical: every part did not need to be modular or identical in each instance. It was the idea of designing something that could unfold in its specificity without changing its structure or the underlying code.’

Lynn explored mass customisation to produce unique iterations of the house; at the same time, he experimented with CNC manufacturing to realise each of the different iterations of the house using the same methods.40 In this manner, Lynn designed over 50,000 houses — all with the same number and type of components. With this kind of model for architecture, people could customise their house according to their needs while remaining within a specific framework for design production. Two decades later, there is still little as robust as this approach in either design or manufacturing in any industry.

The American architect Frank Gehry’s influence on the use of computational tools could be said to be more broadly spread, as he used digital technologies to develop design methods as well as design software. As exemplified in one of Gehry’s first projects to use a computer, the unbuilt Lewis Residence (1989-1995), he utilised an iterative design process of physical model building and 3D modelling over years of experimentation.41 In this particular process, the starting point is building the physical model, which is later captured in a 3D digital model. Then, a physical model is again produced from the 3D model and modified with analogue, intuitive model making. After this process, the design is captured again using a 3D scanner to further inform the digital model, and continues to be worked on using analogue and digital techniques over many years.

To enable his designs to be realised with minimal alteration to his intent, and to facilitate the production stages of building design, Gehry and his team created an interface for CATIA, a modelling software originally created for the aircraft industry. The software generated data that could be sent directly to manufacturers without needing to be adjusted for the specific tolerances of the fabrication machines. This was later developed into a separate building information modelling (BIM) software called Digital Project by Gehry Technologies. (BIM software manages the different inputs of the various stakeholders in a design process.) Gehry’s customised CATIA platform was used to design and model the building that brought him to the attention of the wider public: the Guggenheim Bilbao, completed in 1997.42 With this project, ‘digital architecture’ reached a large, international audience for the first time. (Gehry Technologies, whose software was largely used in-house, was acquired in 2014 by Trimble, a company that owns many software companies; as a result, Digital Project was made available to the public for purchase and download, letting others model and realise complex, three-dimensional forms like those of the hand-made maquettes Gehry used to design his iconic buildings and products.43)

5. From Virtual to Physical

As we’ve already started to see, advances in both digital and construction technology enabled architects to express and realise forms that previously could only have been conceptualised. The period of the late 1990s and early 2000s is marked by the realisation of the concepts explored in the previous decades at an architectural scale. The boom in the financial markets meant that a huge amount of money was poured into architecture. Later, this boom would end in another recession, that of 2008, but at the time, it was extremely exciting. Architects who had otherwise only explored their work in the form of drawings and animations, or at the scale of installations or small buildings (if they were lucky), could now compete for large-scale projects.

The exploration of what are considered more expressive forms gave rise to iconic architecture in different cities around the world. We’ve already seen how Gehry’s Guggenheim Museum in Bilbao utilised groundbreaking digital tools, but that’s not the only factor that makes it stand out: on a socioeconomic level, the museum’s expressive architectural form contributed to the regeneration of the area — so much so that it gave rise to the term ‘Bilbao Effect’, the idea that a building can provide economic uplift to an area because its ‘showy architecture’ attracts huge numbers of visitors.44

Some consider Gehry to have pioneered an era of ‘technological constructions’, using technology of the kind that is widely applied in parametric design tools and BIM today.45 To find out why, we need only summarise the key innovations in his work and process. The doubly-curved titanium cladding of the Guggenheim Bilbao is celebrated as a turning point in architecture, as it could not have been built without Gehry’s customised computer-aided design (CAD) platform, the forerunner of Digital Project. The physical output was a direct representation of the virtual 3D model. And the model included enough detailed information to be shared among the architects working on the project as well as with contractors, who used CNC-milling machines and other digital fabrication processes available at the time to build the thousands of non-standardised facade panels for the project.

In contrast to the intuitive and artistic approach of more traditional form-making, other architects explored process-driven form-making by conceptualising functional or spatial elements as a series of diagrams — thought of as evolving models in their own right. As exemplified in the works of UNStudio, these spaces are often characterised by continuous form with series of loops, realised as a diagram that captures the spatial organisation of the building. For example, UNStudio’s Möbius House and Mercedes-Benz Museum explore the notions of twisting, folding and voids, merging plan and section to create buildings that are architectural diagrams of these concepts.46 These remain continuing themes in the work of UNStudio today; when one experiences a UNStudio building, the diagram is made obvious through how one circulates through it. The diagram defines the ways in which people can move through space, and enables moments of visual connection between separate parts of the building.

The Yokohama International Port Terminal, designed by FOA in 1995, was at the time considered a futuristic design.47 Realised as an over-400-metre-long terminal with an undulating, intertwining series of forms and spaces, the roof of the terminal mimics a shifting landscape, where people can move seamlessly from the exterior to the interior of the terminal. Advances in computer-aided design enabled the concept to be realised. The complex form of the terminal was captured using detailed, ribbed sections that were then physically realised as structure. The design of the terminal is made up of non-orthogonal walls, floors and ceilings with non-standardised parts — parts that were repeated, but not exactly the same in each iteration. This was made possible through the digital tools used to model the terminal’s form: they allowed for arrays of similar, but varied, components. The repetition of these components, and their geometric morphing, is obvious when a visitor follows the flow of circulation through the terminal: some spaces are more open while others are more compressed within the same circulation space.

6. Collaborative Practice

Then came the Internet. And new communication technologies fuelled by its rise meant that collaboration — inherent to any architectural practice — could now happen at a pace faster than ever before. No longer did one have to wait for architectural drawings to arrive in the post, which made the design process painfully slow. Instead they could be emailed, Fedex’d and uploaded, and worked on almost in real-time by people in different locations. In the 2000s, continued advancements in scientific, philosophical and technological research led to emphasis on the importance of collective intelligence, drawing from principles in nature in both academia and practice.48

This period signified a shift from the machine age to the information age, and some architects started to expand the potential of how practices can operate by leveraging advances in information technologies. Telecommunication, the internet and the digitalisation of projects using BIM allowed some to reform their practice around networked communication, increased collaboration and collective intelligence. (On a conceptual level, collective intelligence is a new social organisation based on decentralisation and collectivity.49) With telecommunication and digital design technologies as its primary modes of communication, OCEAN was founded as one of the first geographically distributed practices in the early 1990s.50 The collaborators of OCEAN had multidisciplinary backgrounds including architecture, urban design, industrial design, interior design and agricultural science. The operation of the practice is said to have been elusive even to its own members, while producing results as a collective effort. After gaining recognition through several successful competition entries and exhibitions, OCEAN expanded to multiple offices and hubs in different locations around the world, each of which eventually operated independently. One of the branches, OCEAN NORTH, with studios in Oslo, Helsinki and Cologne, remained active and became well-known for its design work. Their modes of operation, the fluid transition of individuals and the dissolution of the organisation highlight both the strengths and difficulties of maintaining a network-based collective — from differences in aesthetic preferences to differences in approach. This way of networked working requires adaptation on behalf of each individual member, over time, for every project. Despite the challenges, this way of practicing is extremely common today — from large corporations with multiple offices worldwide to small practices dispersed with one or two members in several cities.

Servo, led by architects Marcelyn Gow and Ulrika Karlsson, has also explored a similar organisation of practice based on network structure and electronic information infrastructure.51 Based in Stockholm and Los Angeles, their objective is to provide the conditions for collaboration between people and their built environment that can improve the quality of experiences at the urban scale. ‘Collaboration is fundamental to architectural design and architectural knowledge production,’ says Gow. ‘The exchange of ideas between members of the design team enables concepts and different approaches to a problem to be considered from multiple perspectives. Digital design increasingly calls for innovative workflows that are capable of assimilating a diverse range of specialised skills as well as collective knowledge bases.’

Today, it isn’t a rare occurrence to witness collaborations among multiple different offices for large international competitions. Perhaps one of the more famous and extensive collaborations in digital architectural design occurred in a competition in New York City in the early 2000s. United Architects (UA) was established for the 2003 competition for a new World Trade Center and brought together multiple internationally known architects including Greg Lynn, UNStudio, FOA and Kevin Kennon Architects.52 This competition revealed the success of collaborative practices based on the potential of collective intelligence: the majority of the six finalists selected from over 650 entries turned out to be proposals from collaborative teams, including UA’s.53

As an alternative to the then-pervasive mode of education that focussed on tradition — emphasising the creation of a ‘signature’ design style of an individual architect — the Architectural Association’s Design Research Lab (AADRL) was created in 1997. It proposed a new kind of pedagogical approach to embrace interdisciplinary collaboration. ‘The AADRL was created out of a belief that the conditions under which architects work, think and learn today are changing in profound and unprecedented ways,’ wrote Brett Steele, the former director of the programme. ‘These demand above all a willingness to experiment with the most basic assumptions that guide not just how architects think, but also how schools, offices and other seemingly stable architectural forms are organised and operate.’54 At AADRL, outside specialists from varying disciplines such as computer programming and robotics support students who are required to carry out their research projects in teams. The pedagogical ambition is to re-frame design research to fuel innovation and to separate itself from traditional design programmes.55 At the AADRL, it’s also easier to transfer knowledge by emphasising open, accessible and shared communication similar to an open-source model. Once students graduate, their previous research is retained and shared to allow others to build on top of existing work.

This model of educating has been replicated in schools around the world since the AADRL’s inception. One of the primary characteristics in this educational model is the learning environment it fosters — ‘carefully fitted to the complex demands all architects face today in their work across networks; of collaborators, fabrication and production systems, and even [digital] design tools.’56

7. Computing Nature

For several decades, academic practice had been a place where architects and designers found refuge in a weakened economic climate that had affected the building industry. As a result, academia had been the bastion of the rise of architectural theory, with design focussed on the representation — mainly through drawing, such as in the work of Peter Eisenman or Daniel Libeskind — of theoretical concepts and ideas appropriated from continental philosophy as well as the new generation of architectural theorists.

This highly charged theoretical environment was coupled with the wide accessibility of new, exciting digital tools that enabled 2D drawings to come to life as virtual 3D models driven by procedural algorithms — step-by-step operations to find form using parameters. Developments in science and philosophy about our understanding of natural behaviour became coupled with digital techniques and tools. Digital technology allowed an evolution of the morphological thinking of the 20th century, giving it new life in concepts of emergence, non-linear and self-organising systems, stigmergy and agent-based modelling.57 ‘Science is opening glimpses into the invisible realm previously beyond our reach,’ says Alisa Andrasek of design laboratory Biothing. ‘It is the most important context, resource and form of thought today. Uncovering how to extract new design intuitions from it has been crucial in my quest to address complexity in built ecologies.’
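
Stigmergy, or indirect coordination through traces left in a shared environment (think ants and pheromone trails), is the kind of concept agent-based modelling makes tangible. In the minimal sketch below, with every value arbitrary, agents deposit a trace on a grid and preferentially follow strong traces; paths self-organise with no central plan.

```python
import random

def stigmergy(size=20, agents=30, steps=400, evaporation=0.98):
    """Toy agent-based model: agents deposit a trace on a shared grid
    and tend to follow strong traces, so order emerges without a plan."""
    field = [[0.0] * size for _ in range(size)]
    positions = [(random.randrange(size), random.randrange(size))
                 for _ in range(agents)]
    for _ in range(steps):
        new_positions = []
        for x, y in positions:
            field[y][x] += 1.0                      # deposit a trace
            moves = [((x + dx) % size, (y + dy) % size)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            # Weight each move by the trace already there (plus noise).
            weights = [field[ny][nx] + 0.1 for nx, ny in moves]
            new_positions.append(random.choices(moves, weights)[0])
        positions = new_positions
        field = [[c * evaporation for c in row] for row in field]  # decay
    return field

trails = stigmergy()   # hotspots mark self-organised 'paths'
```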

The constraints of the tools that architects were experimenting with greatly informed the potential of design outputs. Industry — particularly the rapidly expanding software development sector — played an important role during this period by supporting academic research. Collaborations between academic and industry partners resulted in work that experimented with generative design processes to find new shapes and forms.58 From these, collaborators were able to design structures with high amounts of detail, significantly informed by the limitations and potential of software such as the parametric CAD package GenerativeComponents. This body of collaborative, cross-disciplinary and cross-industry design research connected parameters into complex networks from which form emerged through the changing relationships in the network over time.59

The Architectural Association’s (AA) Design Research Laboratory (DRL) and Emergent Technologies (EmTech) programmes, as well as the Delft University of Technology’s Hyperbody group and SCI-Arc, also engaged rigorously with the notion of complex interactions between parameters and the resulting emergent patterns. The DRL emphasised an interdisciplinary approach to computationally-driven architectural design research; it touched on a wide variety of topics, while also situating itself within a long history of speculative architectural design with projects that dealt with questions of typology, space, infrastructure and urbanism. AA EmTech, on the other hand, developed frameworks for understanding the potential of emergence and natural systems in architectural design through a focus on material behaviour, biomimetics (understanding the rules that underlie the efficiency of natural forms) and computational morphogenesis. Many of the projects developed at AA EmTech in this period were tested through physical prototyping and new technologies, developed to understand potential architectural applications of the research. This way of thinking enables much closer engagement with the behaviour of a particular environment, helping those who inhabit it to understand how architecture can respond to the rules of nature.

Later, the launch of the Institute for Computational Design (ICD) at Stuttgart University in Germany combined this approach with research into novel fabrication technologies. Often tested at the 1:1 scale of a pavilion, ICD’s work continues to this day with industrial and mobile robotics.60 In particular, the institute works with cyber-physical systems that link together artificial intelligence, material sciences and automated manufacturing into new kinds of frameworks for architectural production.

8. Parametric Explosion

For the last decade or so, one of the ongoing debates amongst architects interested in the potential of digital tools and technologies has been whether digital and parametric design tools are merely a means to an end, that is, the ‘how’ of getting something designed.61 Or do these tools themselves embody social and political discourse? Are they symbolic, or even operative, of the ‘why’ and ‘for whom’ of a design? Today, it is apparent that the latter is inextricably true, given developing discussions in the media around uses of artificial intelligence, data privacy, social media and the future of automation. But in the late 2000s, there was slightly more naiveté amongst the vast majority of architects who were, by then, using digital tools in their practice.

Let’s first look at the tools of digital design in the late 2000s. Perhaps one of the more important moments in the evolution of digital design tools was the release of Grasshopper. Designed by David Rutten and released in September 2007, it is a plugin for the widely used design software Rhino. Grasshopper uses a visual, node-based component interface to create generative algorithms that can produce 3D geometry and other functions. The simplicity and ease of the Grasshopper interface in comparison to other available programming languages quickly appealed to many digital designers, thanks to its drag-and-drop, on-and-off, input-output system. Grasshopper instigated an explosion in generative design tools: Ladybug, Honeybee, Geco, Kangaroo Physics, Karamba, BullAnt, Hummingbird, Heliotrope-Solar, Mantis (yes, almost all named after animal species). The outputs of many of these tools are recognisable to well-versed architects today.
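
Grasshopper definitions are visual graphs, but their dataflow logic translates directly to script. Here is a hypothetical Python stand-in for the kind of generative chain described above: a handful of input parameters flowing through a sequence of operations to produce geometry.

```python
import math

def twisting_tower(floors=30, radius=8.0, twist_per_floor=3.0, sides=6):
    """Generative sketch in the spirit of a Grasshopper definition:
    input parameters flow through a chain of operations to geometry."""
    geometry = []
    for level in range(floors):
        angle0 = math.radians(level * twist_per_floor)   # rotate each floor
        taper = 1.0 - 0.4 * level / floors               # shrink with height
        polygon = [
            (taper * radius * math.cos(angle0 + 2 * math.pi * k / sides),
             taper * radius * math.sin(angle0 + 2 * math.pi * k / sides),
             level * 3.5)                                # floor-to-floor height
            for k in range(sides)
        ]
        geometry.append(polygon)
    return geometry

# Change one input and the whole form updates: the input-output
# logic the text describes.
tower = twisting_tower(twist_per_floor=5.0)
```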

While these tools are excellent for form-generation, structural and environmental analysis, and the simulation and optimisation of forms, they cannot be the main driver of an architectural project — they address only part of what a building comprises. Furthermore, architecture embodies not just technical, structural or mechanical issues but a range of social, political and economic qualities and conditions. Following that logic, even architects’ tools themselves — like Revit, Grasshopper, Rhino and other software — bring particular socio-political implications to the ‘why’ and ‘for whom’ of an architectural project.

Parametric design has proven to be ample ground for the exploration and theorisation of this problem. While the first digital generation of architects was interested in how science and innovation could enable new forms of architecture to emerge from generative digital design, more and more architects are now exploring the notion of parametric design as embodying ideology.

Patrik Schumacher of Zaha Hadid Architects is one of the more prominent voices arguing for what he named ‘Parametricism’ in 2008. According to him, parametricism is ‘the great new style after Modernism’, a paradigm emerging ‘from the creative exploitation of parametric design systems in view of articulating increasingly complex social processes and institutions’.62 This alignment of society, style and digital tools made waves within the architecture discipline, and has been heavily debated over the decade since. One of the more prominent critiques of Parametricism as a style concerns how ‘digital architecture’, when realised, is, or isn’t, sensitive to contextual issues, like how buildings deal with local culture.63 ‘Parametricism has no sympathy to local culture until the local culture becomes part of the parametric system,’ says Lei Zheng, architect at ZHA. ‘It is a product of the global economy and negotiates specific boundary parameters resulting in new opportunities, gentrification and reformation.’

From this notion, a second critique has emerged around how parametric architecture is designed and then realised. Often, the complexity of form in buildings by parametric or ‘digital’ architects such as Hadid has demanded the use of overly expensive and inefficient production methods — running over time and over budget, and thus wasting huge amounts of resources.64 ‘Parametricism is short of evidence that it actually works,’ The Guardian architecture critic Rowan Moore has written. ‘It rests on the unproven belief that it is possible to mold architectural forms perfectly to the complex and unpredictable uses they will contain. It is supposed to be adaptable, fluid, responsive and connective with its surroundings, but most parametric buildings so far tend to be the opposite.’65

Indeed, these ‘parametric’ buildings cannot use existing prefab methods of construction: they need entirely bespoke production chains, which renders them extremely expensive one-offs. As a result, many of these buildings have been commissioned by wealthy patrons who wish to have iconic buildings to represent their companies, or even their countries.66 This creates a paradox: despite the ethos of parametricism being rooted in democratisation and collaboration, the buildings it has produced aren’t all that accessible to many everyday people. These problems have been more operatively critiqued in recent years through the emergence of a new body of digital work in architecture, detailed in the following sections.

9. Augmenting Reality

Contemporary culture was changed radically by the Internet and other communication technologies. As technology became more accessible in the 2000s, particularly hardware and sensor technologies, so did the sense that architecture could physically be as performative and vibrant as the algorithms and simulations that architects used in the design process. Digital tools enabled architecture to embody fluidity, temporality, movement and change — which, in turn, also transformed how people move through and interact with their built environment.67 All of this became a mechanism for gaining a new understanding of space. Architects explored to what extent physical architectural elements could respond and adapt to people’s behaviours, changing needs, or even cultural, programmatic or environmental conditions. As part of that exploration, architects began to augment one’s experience of the built environment, often in real time.

One of the first interactive walls developed was the Aegis Hyposurface by dECOi architects, made in collaboration with architect Mark Burry (of Sagrada Família fame) in 2001. The project used almost 900 pneumatic pistons to control metal components in a wall that moved in real time according to changing environmental conditions such as movement or light. From a viewer’s perspective, the mechanical aspects of the project were hidden behind the undulating, moving, triangulated surface — although one could hear the pistons working when activated by changing environmental conditions. As Burry wrote, the Hyposurface represented a shift in understanding space — from determinate and fixed to indeterminate and temporal.68 But how does this idea of a mechanical system that responds to contextual conditions work at the scale of a building? Jean Nouvel’s Institut du Monde Arabe, built in Paris in 1987, had already offered one answer: the building’s facade is made of mechanical, metal brises soleil (sun shades) that open and close according to environmental conditions.69 This dynamism enables people inside the building to understand how their behaviour — either individually or collectively — triggers reactions in the architecture around them.
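
Underlying all such systems is the cybernetic loop described earlier: sense, compare to a target, actuate. A minimal, hypothetical controller sketch follows; the gain, target and sensor readings are invented for illustration.

```python
def facade_controller(light_level, opening, gain=0.05, target=0.6):
    """One tick of a feedback loop: sensor input is compared to a
    target, and the error drives the actuator output."""
    error = light_level - target          # too bright -> positive error
    opening = min(1.0, max(0.0, opening - gain * error))
    return opening                        # fraction the shading is open

# Hypothetical sensor readings over a sunny afternoon.
opening = 0.5
for reading in (0.4, 0.7, 0.9, 0.8, 0.5):
    opening = facade_controller(reading, opening)
```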

Yet the Hyposurface was extremely expensive, technology-heavy, and only worked for a few minutes before its pneumatic system would shut down. Within the decade, however, technology became much more lightweight, effective and affordable. Spaces could become embedded with technologies that were activated by human presence — either by touch, movement, or sound. The work of Canadian architect and academic Philip Beesley, particularly the project Hylozoic Ground for the Canadian Pavilion at the Venice Architecture Biennale in 2010, utilised components that were laser cut out of lightweight plastic and hung in a mesh from the ceiling of the installation. The components were part of a larger network of shape-memory alloy, light and touch sensors and microprocessors that responded to human interaction and behaviour. As one moved through the space the installation would ‘create waves of empathetic motion’ that ‘shivered’ through the installation, pulling visitors into a spatial experience that responded to their interaction with it in an almost human-like way.70 The technology was embedded, almost hidden, within the installation — leaving visitors to the exhibition curious as to how it could possibly respond to them.

10. Digital Fabrication

Imagine a world where large, monolithic factories churning out millions of objects, to be shipped around the world on vast transportation networks, didn’t have to exist. In their place: small-scale digital fabrication machines like 3D printers that could fit into your home or office, enabling you to make whatever objects you wanted or needed — from household items to furniture to your own home. A shift from consumerism to prosumerism — where the consumer is also the producer — enables a vast transformation to take place in how we make the objects around us. This transformation is on its way to meeting its full potential because of a revolution in digital fabrication.

The first open-source desktop 3D printer, Darwin — the name of which is no coincidence — was released over a decade ago, in 2008.71 It exemplified the idea of digital fabrication for the prosumer. While 3D printing was not a new idea — it had been a topic of study in work on stereolithography since the mid-1980s — the concept behind Darwin was revolutionary.72 Darwin sold for $200, an affordable and accessible price for many people. It was also capable of making the vast majority of its own parts. Once completed, it could essentially copy itself (and its copy could copy itself, and its copy’s copy could copy itself), as well as make other objects: what is called a self-replicating machine. The idea for Darwin originated in the cybernetics of von Neumann, described earlier, as well as in the more contemporary open-source community, whose members argued for freedom of access to information and were adamant that technological tools needed to be at the forefront of our societal concerns, in contrast to the privatisation of data and tools common in a capitalist market.73

Indeed, digital fabrication technologies such as CNC-milling machines, laser cutters and 3D printers challenge the very mechanisms of a consumer-based market. These technologies enable people to quickly reproduce parts for objects, or entire objects themselves, at much more affordable prices than more customised or handmade objects. Furthermore, they enable the customisation of parts relatively easily using the principles of parametric design. The affordability of these machines generally follows Moore’s law, the observation that computing power doubles roughly every two years; as computation gets exponentially cheaper, the machines built on it become more affordable and therefore more accessible. The Columbia University professor Hod Lipson gives an account of a possible future of digital fabrication technologies, describing a world where they pervade every aspect of one’s daily life:

Place: Your Life
Time: A few decades from now
… even in the future, it is hard to get up in the morning.
The smell of freshly baked whole wheat blueberry muffins wafts from the kitchen food printer. The cartridges to make these organic, low-sugar muffins were marketed as a luxury series. The recipes were downloaded from different featured artisan bakers from famous restaurants and resorts.
The first time you showed the food printer to your grandfather, he thought it was an automated bread machine — an appliance from the 1980s that took foodie kitchens by storm. He could understand why you wanted to print processed food until his anniversary came. To celebrate, you splurged on deluxe food cartridges and printed him and your grandmother a celebratory dinner of fresh tuna steaks, couscous and a wildly swirled chocolate-mocha-raspberry cream cake with a different picture within every slice.74

For now, these tools are mostly found in the manufacturing and design industries and tend to be inaccessible to the public. Yet they have continued to increase in accessibility, and therefore in influence. One of the driving forces behind this shift is the fablab: a place, usually in a city, where computer-controlled technologies, and specialists in using those technologies, are accessible to the public. The fablab was brought to the forefront of the design community by MIT Professor and Director of the Centre for Bits and Atoms Neil Gershenfeld. In his 2012 article in Foreign Affairs, ‘How to Make Almost Anything’, he stated that a ‘new digital revolution is coming, this time in fabrication’.75 Today, approximately 1,500 registered fablabs exist around the world, although the number is likely much higher when including more informal ‘maker’ spaces not registered on the Fab Foundation website.76 They are located everywhere from the US, where the fablab movement originated, to South Africa; the only continent without a registered fablab is Antarctica.

WikiHouse (2011-present) is one of the more well-known architectural projects to harness the potential of distributed manufacturing using digital fabrication technologies.77 Started in 2011 by Alastair Parvin, Nick Ierodiaconou and Indy Johar of UK-based practice Architecture 00, WikiHouse aims to put ‘low-cost, low-carbon buildings into the hands of every citizen, community and business.’78 Architecture 00 has proposed that digital fabrication can enable houses to be fabricated and assembled much more efficiently and cheaply than is possible with typical methods of production. To achieve this, they introduced the WikiHouse building system: using Creative Commons licensing and a single CNC-milling machine, it requires little specialist knowledge to design, fabricate and assemble a small home. All that is needed is internet access — to download the user manual files — and some training in using the CNC-milling machine to mill timber sheet material. The resulting CNC’d building parts then fit together very simply, using pegs.
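The parametric logic of such a building system is simple to sketch. Below is a minimal, hypothetical Python illustration (not WikiHouse’s actual code) that generates the 2D cutting outline of a pegged sheet part from a few parameters; an outline like this is what a CNC-milling machine would receive as a toolpath.

```python
# A minimal, hypothetical sketch of the parametric logic behind a
# WikiHouse-style building system: a rectangular timber-sheet part whose
# cutting outline, with peg tabs along its top edge, is generated from a
# handful of parameters.

def part_outline(width=1200.0, height=600.0, thickness=18.0, n_pegs=3):
    """Return the outline of a sheet part as a list of (x, y) points in mm.

    Each peg is a tab one material-thickness deep, sized to slot into a
    matching hole cut into the mating part.
    """
    peg_w = width / (2 * n_pegs + 1)          # pegs and gaps alternate evenly
    pts = [(0.0, 0.0), (width, 0.0), (width, height)]
    # Trace the top edge right-to-left, stepping up over each peg tab.
    for i in reversed(range(n_pegs)):
        x_right = (2 * i + 2) * peg_w
        x_left = (2 * i + 1) * peg_w
        pts += [(x_right, height), (x_right, height + thickness),
                (x_left, height + thickness), (x_left, height)]
    pts.append((0.0, height))
    return pts

# Two differently sized parts from the same definition -- the essence of
# parametric customisation.
print(part_outline())                         # standard part
print(part_outline(width=900.0, n_pegs=2))    # narrower variant
```

Changing a single parameter yields a differently sized part with correctly placed pegs, which is the sense in which digital fabrication makes customisation nearly free.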

WikiHouse has received much critical acclaim from the media. However, its limitations as a system have inspired many architects and designers to ask questions — from how it deals with materials, resources and the environment, to issues of design complexity, scalability, knowledge-transfer and labour.

The African Fabbers Project is one of the clearer responses to some of these issues.79 Instigated by Italian architect Paolo Cascone, the African Fabbers Project merges local building practices with technological systems and digital thinking. The project sits in direct response to a system like WikiHouse, asking: what if you need to change material? What if you are required to use locally available resources? What if you want to provide people with more opportunities to earn money? Through a ‘learning-by-doing’ approach, Cascone’s work brings together open-source hardware technologies, digital fabrication tools and self-build practices to propose an educational network that enables the creation of jobs on a continent with significant problems of unemployment and underemployment. The African Fabbers Project proposes a solution that combines digital thinking with pressing societal issues and traditional knowledge.

One of the most revolutionary ideas about the future of digital fabrication in design can be found in the work of architect and researcher Nadya Peek, who studied, and works, with Gershenfeld at MIT. Peek has written extensively about the potential of the fablab model and digital fabrication machines as tools for democratising design, widening participation, and enabling sustainable and inclusive infrastructure for production.80 She recognises that the machines we use to make machines are typically highly inaccessible, since they are located in factories, but also that they are highly restrictive in what they can do. Peek’s position on the status quo of digital fabrication is clearly articulated when she writes:

‘The original user model for digital fabrication assumed the machines would be operated by an insubordinate workforce with no interest in improving the technology they were being replaced with. The tool-maker was separated from the tool-user.’81

Peek argues that if we want the promise of a digital revolution in fabrication to come true for everyone, digital fabrication tools need to be rethought as machines that can make almost any other machine at a much lower cost. Embracing this mindset would ensure that these machines can make things that reflect the environment — social, material and technological — in which objects, or in our case the elements that make up buildings, are produced with the resources available. This would enable a diversity of tools to be produced and become accessible to more people.82 Distributing manufacturing tools, once siloed within institutions, into the lives of everyday people holds huge potential for the future.

11. Robots

Robots have been part of our collective cultural consciousness for a long time, yet they mean different things to different people. Some think a washing machine is a robot; others think Rachael from the film Blade Runner is the most exemplary version of one. Yet whatever emotions, feelings or futures the notion of a robot conjures for a person, it is undeniable that robots are here to stay and will become ever more present in our daily lives.

The term robot was first used in Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots) to describe ‘artificial people’ who could be more mechanically perfect, and therefore more intelligent, than humans.83 Later, the first industrial robotic arm — called Unimate — was created in 1954 by the American inventor George C. Devol Jr.84 Unimate was quickly adopted to replace human labour in manufacturing — first in often-dangerous die-casting processes at a General Motors factory, and later in many cumbersome, laborious tasks such as welding, lifting and stacking. Unimate was hugely influential in the automobile industry and was licensed to other companies, such as Kawasaki in Japan and Nokia in Finland, which made this technology present in multiple markets around the world.85

In the early 2000s, the highly competitive market around industrial robots lowered their manufacturing costs, which made them more widely available to architects and designers. As for the robotic manufacturing companies themselves, they, too, began to look for alternative industries to engage with. As such, architects had the chance to ask: how could the robotic arm replace or enhance human labour in design? How could robots amplify the experience of space? How could they aid the construction process?

One of the first contemporary speculative architectural proposals to use an industrial robot arm was R&Sie(n)’s Olzweg, the second-place entry in the 2006 competition for the FRAC Orléans courtyard in Orléans, France.86 The proposal took on aspects of British architect Cedric Price’s 1961 Fun Palace in its emphasis on an ever-changing space where technology and humans interacted. In Olzweg, a robotic arm placed in the courtyard on a moving platform would have perpetually constructed a space made of recycled glass elements, moving and sliding them in and out of place. RFID-tagged visitors were meant to navigate the always-evolving building with their cell phones, ‘self-reprogramming and progressive[ly] [adapting] as they moved’.87 In this work (and others since), R&Sie(n) presented a framework for architectural production that embedded a robotic arm at its very centre while reconceptualising the role of the human inhabitant.

Although R&Sie(n) were unsuccessful in realising this project, its legacy was impactful and inspired a diverse body of work incorporating the industrial robotic arm into architectural design. That same year, Gramazio Kohler Research at ETH Zurich began to physically test the potential of a robotic arm to achieve complex curvature. In The Programmed Wall (2006), an industrial robot was used to pick and place bricks in a distributed array to create porous, doubly curved surfaces.88 The robot was positioned much as in Olzweg, on a track that allowed it to move incrementally in order to position the bricks. In 2017, this technique was used by Archi-Union Architects to construct the front wall of the Chi She art gallery in Shanghai. Also in 2017, construction startup Construction Robotics launched SAM, a bricklaying robot which ‘increased masons’ productivity by 3-5x while reducing lifting by up to 80 percent.’89
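The logic behind such pick-and-place routines can be gestured at briefly. The sketch below is a simplified assumption in Python, not Gramazio Kohler’s actual code: it computes a position and rotation for each brick in a wall whose surface undulates along a sine curve, and each resulting tuple would become one pick-and-place instruction for the robot.

```python
import math

# A simplified sketch (not Gramazio Kohler's actual code) of robotic
# brick placement: every brick gets a position and rotation from a rule,
# here courses that undulate along a sine curve, and each result becomes
# one pick-and-place instruction.

BRICK_HEIGHT = 0.07   # metres
SPACING = 0.30        # horizontal centre-to-centre distance, metres

def wall_placements(n_courses=20, bricks_per_course=30,
                    amplitude=0.15, wavelength=3.0):
    placements = []
    for course in range(n_courses):
        z = course * BRICK_HEIGHT
        offset = (course % 2) * SPACING / 2            # running bond
        for i in range(bricks_per_course):
            x = i * SPACING + offset
            phase = 2 * math.pi * x / wavelength + course * 0.2
            y = amplitude * math.sin(phase)            # depth of the surface
            # Rotate each brick to follow the local tangent of the curve.
            slope = (2 * math.pi * amplitude / wavelength) * math.cos(phase)
            angle = math.degrees(math.atan(slope))
            placements.append((x, y, z, angle))        # one robot instruction
    return placements

for x, y, z, angle in wall_placements()[:3]:
    print(f"place brick at ({x:.2f}, {y:.2f}, {z:.2f}) m, rotated {angle:.1f} deg")
```

Varying the amplitude, wavelength or per-course phase shift produces a different porous, doubly curved wall from the same placement rule.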

Certain limitations arise when programmed industrial robots, rather than people, assemble architectural elements. The first is the machine itself. It cannot move without instruction — i.e. being programmed — and it is restricted by its reach radius and the number of axes it moves along. Furthermore, it requires an end-effector — the device at the end of a robotic arm that makes the robot capable of performing an action, like a hand at the end of your arm. The end-effector is designed and manufactured specifically for the task it is meant to do, so it requires customisation. Industrial robots can’t really do anything on their own without end-effectors or programming of some kind. It is also important to note that in both of these models, the robots replace human labour.
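These kinematic limits are easy to make concrete. A minimal sketch, assuming a simple two-link planar arm rather than a real six-axis robot: a target point is reachable only if it falls within the annulus defined by the arm’s link lengths, which is one reason projects like Olzweg and The Programmed Wall mounted the robot on a moving platform or track.

```python
import math

# A minimal sketch of a robot's workspace limit, assuming a simple
# two-link planar arm (a real industrial robot has six axes, but the
# principle is the same): a point is reachable only if its distance from
# the base lies between the arm's minimum and maximum radius.

def reachable(x, y, link1=1.2, link2=0.9):
    """True if point (x, y), in metres from the robot base, is within reach."""
    r = math.hypot(x, y)
    return abs(link1 - link2) <= r <= link1 + link2

print(reachable(1.8, 0.5))   # True: inside the annular workspace
print(reachable(3.0, 0.0))   # False: beyond the maximum reach radius
print(reachable(0.1, 0.0))   # False: too close to the base to fold into
```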

In recent years, this critique has spurred a large body of work in architecture addressing these issues of mobility, labour and customisation. The Institute for Computational Design and Construction at the University of Stuttgart, led by architect Achim Menges, has developed what is referred to as a cyber-physical approach, in which virtual and physical data are interlinked using both robotic and sensor technologies. In the BUGA Fibre Pavilion 2019, construction principles found in nature — harking back to the morphogenetic and biological principles of the early 20th century — are combined with advanced robotic technologies and fibre composites, achieving an architectural form that is expressive, lightweight and structurally efficient.90 This integrative and holistic approach places the robot within a larger framework of construction, while using it for things that humans would find too difficult or tedious to do.

Maria Yablonina, who studied under Achim Menges at ICD Stuttgart, has taken this approach a step further with her 2015 project, Mobile Robotic Fabrication System for Filament Structures.91 In this project, she designed a series of mobile, task-driven, wall-climbing robots that operate semi-autonomously within a larger construction framework, weaving filament — threadlike material — together to form structures.92 ‘In the past decade, we have seen how autonomous mobile robots designed to work alongside humans have significantly changed the way human labour is performed and distributed in many industries,’ Yablonina offers as rationale. ‘This shift has not yet happened in construction and architecture, but that is undoubtedly only a question of time. In the meantime, we would need to not only consider a range of tasks that would be the most convenient for automation but also inquire into the ways this technology might change the architectural landscape beyond construction, potentially impacting the way we occupy built environment.’ Yablonina envisions small robots working safely alongside humans as companions while remaining within the architectural space — ‘continuously performing construction and spatial reconfiguration tasks in response to their human co-habitants.’ But she also points out that it is crucial to ask questions around ownership and decision-making. ‘If we are to co-live with architectural machines, who controls the way they make decisions and collect data?’ she asks.

This notion of semi-autonomous robotic collaboration is one that many architects and designers today are developing for different applications and contexts. Some work looks at the ways in which autonomous drones and other manufacturing technologies can be used to construct large-scale architecture and infrastructure: drone technologies can be operated semi-autonomously or fully autonomously, depending on their programming and the context they operate in, and can navigate sites unfit for human habitation.93 Other work rethinks the modular semi-autonomous robot as part of a larger reconsideration of architecture’s permanence, as in the work of Design Computation Lab (DCL) at the Bartlett School of Architecture, UCL.94 DCL has explored the potential of semi-autonomous mobile robots by designing robots that resemble the building parts they collaboratively arrange, responding to the changing needs of people in real time.

12. Radical Rethinking

All of this work exploring the potential of robots in architecture would be impossible if not for the revolution in information and data technologies in supercomputing and artificial intelligence over the last decade, or what has been referred to as the ‘Big Data’ revolution. Big Data, or the use of extremely large data sets for computational analysis, has found kinship with the digital revolution in architecture from 2012 onwards.95 This has been referred to as the ‘second digital turn’, in which a new scientific intelligence became embedded in architectural thinking.96 The parts that compose architectural elements could be seen at a resolution and level of detail never possible before. This catalysed architects to rethink what buildings are made of (from their parts to their materials) and how they are interacted with (from design to production to how they are experienced). Architect Alessandro Bava writes: ‘How could these innovations in computing be used to better understand a building’s environmental performance, or the best way to design urban planning interventions, or production and construction processes? How could artificial intelligence including machine learning enable architects to design novel kinds of architecture that can better respond to the changing world around it? How can digital tools enable architects and designers to create better architecture for more people?’

Furthermore, the financial crash of 2008 left many graduating architects unable to find work in practice. Academia responded by supporting younger teachers and practices, many of whom were working multiple academic jobs to get by, in acquiring digital design and fabrication technology that had become significantly more affordable. Access to these technologies, combined with the urgent need to learn from the failures of previous generations in a post-2008 environment and with scientific innovations in data and computation, meant that this younger generation had the potential to rethink the role of the architect. They could reconsider what architecture was made of, what it was meant to do, and who it was meant to serve — reviving a sense of socio-political urgency in the industry.

13. The Discrete

Integrating socio-political awareness and critique into architecture is important. And because digital technology is readily available at very low cost, there is an entire generation of architects and designers who grew up highly literate in these technologies. As a result, this is the first moment at which social responsibility and digital and automated technologies have the potential to be accessible to everyone. What architects dream of can now come to life more easily than ever before. So what does architecture activated by a sense of social responsibility, combined with the most advanced digital technologies, look like?

The Discrete is an emerging body of work that rethinks the basic building blocks of architecture.97 At its core is the wish to ‘redefine the entire production chain of architecture by accelerating the notion of discreteness in both computation and the physical assembly of buildings’.98 Architecture in a Discrete approach is understood as being made of a self-similar, serialised and repeatable kit of parts that can be combined in many different ways. The Discrete is catalysed by today’s ability to compute design possibilities through a finite set of rules more quickly than ever before, building in parameters that can be tectonic, environmental, material and, importantly, socially aware and participatory. ‘Architecture, as an industry and discipline, is by necessity redefining its role in the built environment and in aesthetic discourse,’ explains architect and theorist Viola Ago. ‘The systematic infrastructure of architecture as a practice has been absorbed by larger corporations. As a result, young practices are often placed at a standstill for lack of integration or effective entrance mechanisms to the corporatisation of architecture. In these economic and political shifts, working with Discrete component logics — components that assemble to some larger whole — is incredibly liberating for emergent young designers and architects. It gives them an opportunity to be active participants in the evolution of what it means to practice architecture in our current, post-capitalist culture.’

As Ago touches upon, the Discrete upends the traditional paradigm of architecture being made of fixed parts that serve singular functions. Kits of Discrete architectural parts are instead combined through their constraints to discover their functions and possibilities. They can also be disassembled and recombined in different ways. In this way, there are no predetermined hierarchies, only possibilities embedded in the design of each building block; function emerges solely from these combinations and accumulations. The Discrete argues that rethinking these basic building blocks so that they can change over time enables greater equity and democratic thinking throughout all stages of architectural production. This provides an architectural framework more relevant to today’s urgent issues, which pertain to us all — issues like climate change and migration. Furthermore, the Discrete provides a more adaptable and agile framework for architectural production: designers can work with clients to consider, throughout the design process, the changes a building may require over the course of its life, or over the lives of the people inhabiting it.
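The computational core of this approach can be illustrated in a few lines. The toy sketch below, written under assumed rules rather than any practice’s actual toolchain, aggregates a structure from a single repeatable part on a grid: growth happens only where an open connection allows it, so form emerges from combination rules rather than from a predetermined whole.

```python
import random

# A toy sketch of Discrete aggregation (illustrative assumptions, not any
# practice's actual toolchain): one repeatable part occupies a cell on a
# grid and offers connections on four sides. Growth happens only where an
# open connection allows it, so the overall form emerges from combination
# rules rather than a predetermined design.

OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # the part's four connections

def aggregate(n_parts=50, seed=1):
    random.seed(seed)
    occupied = {(0, 0)}                        # start from a single part
    while len(occupied) < n_parts:
        part = random.choice(tuple(occupied))
        dx, dy = random.choice(OFFSETS)
        candidate = (part[0] + dx, part[1] + dy)
        # A connection is open only if the neighbouring cell is free.
        if candidate not in occupied:
            occupied.add(candidate)
    return occupied

structure = aggregate()
print(f"{len(structure)} identical parts, one emergent configuration")
# Disassembly is the same operation in reverse: removing a part reopens
# its connections -- there is no fixed hierarchy among the parts.
```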

Some of this work is being developed in the form of video games — a participatory platform accessible to many different kinds of people, often from the privacy of their own homes. The game Common’hood by Jose Sanchez of Plethora Project imagines a post-scarcity world, where players ‘grow their economy and their community.’99 This is done through digital fabrication tools in fablab-esque environments with CNC machines, alongside platforms for communication and interaction such as online marketplaces. In Common’hood’s world, players are engaged, empowered and given agency over the tools they use to construct environments according to their own needs. This example represents a quality that runs throughout the Discrete: here and elsewhere, it is a means of critiquing top-down parametric models of design and production.

14. What’s Next: Construction Goes Digital

A recent report by the McKinsey Global Institute showed many around the world a fact that those within the architecture and construction industries had known for a long time: construction is one of the least digitised industries in the world, second only to agriculture and hunting.100 Furthermore, productivity in the construction industry has not risen since World War II.101 While architectural design practices have been using digital tools for over 30 years, construction has tended to remain profoundly analogue, reliant on semi-skilled or unskilled manual labour on the building site. This makes construction an industry ripe for the integration of more digital technologies. With them, the industry’s productivity could increase, jobs could be created, and the everyday person could become more connected to the digital in the built environment.

This integration of digital tools and automated technologies into building practices has become ever more urgent in light of the agility that will be required to cope with the effects of climate change, including the increased mobility of people and the reduction in material and human resources. Architecture that could accommodate more people in the event of mass migration, or construction practices that could efficiently use local resources instead of relying on global supply chains, are possible results of digitising the production of the built environment.

This is not to say that the industry has seen no digital innovation. Since 1978, Japanese construction companies such as Kajima, Kumagai Gumi, Obayashi, Taisei, Takenaka and Shimizu have been developing automated construction technologies, spurred by a significant predicted labour shortage in the country.102 Today, Japan remains the forerunner in integrating automated construction technologies into the industry. Much of this innovation has focused on task-specific automation, from robots that lay tiles to those that assemble ceiling elements or trowel concrete.103

Various construction technology companies around the world have begun to learn from Japanese innovation as the skills shortage experienced in Japan becomes more commonplace in other countries. For example, the American company Built Robotics has developed autonomous construction vehicles to move earth and other material resources around a construction site.104 The Autonomous Manufacturing Lab at UCL and Google have harnessed drone technologies to survey sites that are either too hazardous for human surveyors or so large that surveying them manually would demand too much human labour.105 They also use drone technologies to deposit construction materials.

Many of these innovations centre on replacing human labour with automated technologies. Other developments in automating architectural production focus not on automating the construction site itself but on moving many of the processes that would take place on site into a factory setting. These developments aim to disrupt the macro-organisational and logistical problems embedded in many traditional construction practices; augmented reality (AR), for example, is increasingly used to deal with the imprecision inherent in construction.

Katerra, one of the first construction startups to become a ‘unicorn’ (i.e. a private company valued at over 1 billion USD), has developed a model of factory-made architecture.106 Parts are designed, manufactured and assembled by a single company in a factory, similar to Apple with computers or iPhones. While this is an idea that has been around since Le Corbusier’s Maison Dom-Ino of 1914-1915, factory-produced architecture has been made far more feasible by advances in automation. Similar in setup to a Tesla factory, Katerra can produce prefabricated architectural elements and buildings at an incredibly fast rate, streamlining the often disparate practices of a traditional building site into one location: its factory.

In terms of logistics, some companies are looking to streamline construction using digital platforms and web applications that leverage artificial intelligence and machine learning. Companies such as Procore link all the various stakeholders in a project together on one platform.107 The aim is to make decisions and processes more efficient and transparent — processes which have traditionally been opaque and are often sources of disputes on construction sites.

Other projects look at automation as a way of engaging inhabitants in the production of the urban environment. Sidewalk Labs, a Google-affiliated project in Toronto, Canada, collects data from the city’s inhabitants to improve infrastructural decisions and mobility at the urban scale.108 While the ethics of this project have been much debated by architects around the world, it highlights the notion that in the built environment of tomorrow, our interaction with automated systems may need to become much more transparent.109

It also commercialises the everyday person’s movements in the urban environment. Here, the work of T.F. Tierney is particularly interesting, as she writes that in this project we see a shift from a citizen-based model to a consumer model for urban planning, where all citizens’ ‘personal and environmental data is an economic resource.’110 This is a powerful shift — where an inhabitant in a city becomes a resource for a private corporation’s design of the urban environment around them.

15. Digital Transparency

Digital thinking, tools and technologies are extremely powerful and important for all people today — and into the future. As architect and researcher Valentin Soana has stated, the digital in architectural design enables ‘new systems where architectural processes can emerge through close collaboration between humans and machines; where technologies are used to extend capabilities and augment design and construction processes.’ This enables a movement beyond ‘top down approaches, in which design decisions are made based on human biases and limitations; technologies will help us better understand social dynamics, materials, structural systems and formation processes. More than productivity gains, we’ll rethink the way we live and the way we make decisions — and ultimately how we articulate our built environment.’ As these tools become ever more accessible to everyday people, it is important that designers and tool-makers are open about the ways in which they are used — for what, and why. This transparency and openness about the power of digital technology in the production of the built environment is necessary for better serving all people and designing a more equitable world.

References

1. Carpo, Mario. The Alphabet and the Algorithm. MIT Press, 2011.

2. Carpo, Mario. The Second Digital Turn: Design Beyond Intelligence. MIT Press, 2017. 3.

3. Lynn, Greg. Animate Form. New York, NY: Princeton Architectural Press, 1999.

4. Deamer, Peggy, Marianela D’Aprile, Douglas Spencer, Eva Hagberg Fisher, and Dan Howarth. The Architecture Lobby. Accessed October 1, 2019. s10.io/archlobby.

5. Bechtel, William and Robert C. Richardson. “Vitalism.” Routledge Encyclopedia of Philosophy. London: Routledge. 1998.

6. “The World of Darwin: Darwin’s Observations.” BIOdotEDU. Brooklyn College. Accessed October 3, 2019. s10.io/darwob.

7. Darwin, Charles. The Origin of Species. London: Vintage, 2019.

8. Thompson, D’Arcy Wentworth. On Growth and Form. Cambridge: Cambridge University Press, 1968.

9. Sullivan, Louis H. “The Tall Office Building Artistically Considered”. Lippincott’s Magazine (March 1896): 403–409.

10. “Organic Architecture.” Guggenheim, November 17, 2016. s10.io/guggorganic.

11. “Frank Lloyd Wright Trust.” The Prairie Style | Frank Lloyd Wright Trust. Accessed October 3, 2019. s10.io/prairie.

12. Menges, Achim. “Performative Morphology in Architecture.” SAJ 5 (2012): 92–104. s10.io/mengespermorph.

13. Gallo, Giuseppe, and Giuseppe Pellitteri. “Luigi Moretti, from History to Parametric Architecture.” 2018. doi:10.13140/RG.2.2.28349.15842.

14. Ibid.

15. “The Augmented Human: A Research Report by SPACE10.” SPACE10. Accessed October 1, 2019. s10.io/augmentedhuman.

16. Claypool, Mollie. This is Gaudi. Lawrence King Publishing, 2017.

17. Claypool, This is Gaudi.

18. Burry, Mark. “The Analects of Gaudí.” MARK BURRY, April 22, 2018. s10.io/burrygaudi.

19. Nerdinger, Winfried. Frei Otto Complete Works: Lightweight Construction, Natural Design. Basel: Birkhäuser, 2005.

20. Fuller, Richard Buckminster. Nine Chains to the Moon: An Adventure Story of Thought. Birkhäuser, 2019.

21. Wiener, Norbert. Cybernetics; or, Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press, 2019.

22. Babbage, Charles. The Analytical Engine and Mechanical Notation. London: W. Pickering, 1989.

23. Ibid.

24. Teuscher, Christof. Turing’s Connectionism: An Investigation of Neural Network Architectures. Springer Science & Business Media, 2012.

25. George, F.H. Philosophical Foundations of Cybernetics. Tunbridge Wells: Abacus Press, 1979.

26. Neumann, John von. The Computer and the Brain. New Haven: Yale University, 1958.

27. Price, Cedric. “Cedric Price. Fun Palace for Joan Littlewood Project, Stratford East, London, England (Perspective). 1959–1961: MoMA.” The Museum of Modern Art. Accessed October 1, 2019. s10.io/funpalace.

28. Mathews, Stanley. “The Fun Palace: Cedric Price’s Experiment in Architecture and Technology.” Technoetic Arts 3, no. 2 (January 2005): 73–92. s10.io/funpalace2.

29. Negroponte, Nicholas. The Architecture Machine: toward a More Human Environment. Cambridge, MA: MIT Press, 1972.

30. Negroponte, Nicholas. “Semantics of Architecture Machines.” Architectural Forum, October 1970, 40.

31. Ibid.

32. Frazer, John. “Creative Design and the Generative Evolutionary Paradigm.” Creative Evolutionary Systems, 2002, 253–74. s10.io/frazerdesign.

33. Abrahams, Tim. “Computers in Theory and Practice.” Architectural Review. Accessed October 3, 2019. s10.io/compthry.

34. “Peter Eisenman in conversation with Greg Lynn.” CCAchannel. YouTube. YouTube, April 12, 2013. s10.io/eisenmanlynn.

35. “The Foundations of Digital Architecture: Peter Eisenman.” CCAchannel. YouTube. YouTube, May 21, 2013. s10.io/eisenmanfndtns.

36. Eisenman, Peter. “Biocenter 1987.” EISENMAN ARCHITECTS. Accessed October 1, 2019. s10.io/biocenter.

37. Lynn, Greg, ed. “Folding in Architecture.” Architectural Design 63, no. 1-4 (1993). s10.io/foldinarch.

38. Lynn, Greg. Folds, Bodies & Blobs: Collected Essays. Bruxelles: La lettre volée, 1998.

39. Ibid.

40. Shubert, Howard. “Embryological House.” CCA. Canadian Centre for Architecture. Accessed October 1, 2019. s10.io/embryohouse.

41. “Yale Exhibition Highlights Pioneers of Digital Architecture.” YaleNews, January 7, 2014. s10.io/yaledig.

42. “How Analog and Digital Came Together in the 1990s Creation of the Guggenheim Museum Bilbao.” Guggenheim, October 20, 2017. s10.io/andigbilbao.

43. Winston, Anna. “US Firm Buys Frank Gehry’s Technology Company.” Dezeen. Dezeen, May 7, 2015. s10.io/gehrytechsold.

44. Moore, Rowan. “The Bilbao Effect: How Frank Gehry’s Guggenheim Started a Global Craze.” The Guardian. Guardian News and Media, October 1, 2017. s10.io/bilbaoeffect.

45. Nero, Irene. Transformations in Architecture: Frank Gehry’s Techno-Morphism at the Guggenheim Bilbao. Köln, Germany: Lambert Academic Publishing, 2008.

46. “Mercedes-Benz Museum.” UNStudio. Accessed October 1, 2019. s10.io/mbenzmus.

47. Langdon, David. “AD Classics: Yokohama International Passenger Terminal / Foreign Office Architects (FOA).” ArchDaily. ArchDaily, October 17, 2018. s10.io/yokintnltrm.

48. Latour, Bruno, Steve Woolgar, and Jonas Salk. Laboratory Life: the Construction of Scientific Facts. Princeton, NJ: Princeton University Press, 2006.

49. Ibid.

50. “OCEAN Design Research Association.” OCEAN Design Research Association. Accessed October 1, 2019. s10.io/oceandesign.

51. “Servo Stockholm.” servo stockholm. Accessed October 1, 2019. s10.io/servo.

52. “United Architects. World Trade Center Proposal, New York, New York, Study Model 9. 2002: MoMA.” The Museum of Modern Art. Accessed October 1, 2019. s10.io/momaua.

53. Kennon, Kevin. “Does Collaboration Work?” Architectural Design 76, no. 5 (2006): 50–53. s10.io/kennoncollab.

54. Bruijn, Willem de. “Writerly Experimentation in Architecture: The Laboratory (Not) as Metaphor.” TU Delft Library, 48–58. Accessed October 3, 2019. s10.io/archexp.

55. Ibid.

56. Steele, Brett. “The AADRL: Design, Collaboration and Convergence.” Architectural Design 76, no. 5 (2006): 58–63. s10.io/steele.

57. Weinstock, Michael. The Architecture of Emergence: the Evolution of Form in Nature and Civilisation. Great Britain: Open University, 2016.

58. Ednie-Brown, Pia and Alisa Andrasek. “CONTINUUM: A Self-Engineering Creature-Culture.” Architectural Design 76, no. 5 (2006): 58–63. s10.io/cntnm.

59. Ibid.

60. “Research Infrastructure.” Institute for Computational Design and Construction. Accessed October 1, 2019. s10.io/icdc.

61. Claypool, M., M. Jimenez Garcia, G. Retsin, and V. Soler. Robotic Building: Architecture in the Age of Automation. DETAIL Verlag, 2019.

62. Schumacher, Patrik. “Parametricism as Style – Parametricist Manifesto.” Patrik Schumacher, 2008. s10.io/prmtrsmman.

63. Moore, Rowan. “Zaha Hadid’s Successor: My Blueprint for the Future.” The Guardian. Guardian News and Media, September 11, 2016. s10.io/schumacherblprnt.

64. Blahut, Chelsea. “Zaha Hadid Architects Releases Statement on Budget Increase and Cancellation for Tokyo’s New National Stadium.” Architect Magazine. July 28, 2015. s10.io/hadidbudget.

65. Moore. “Zaha Hadid’s Successor: My Blueprint for the Future.”

66. Moore, Rowan. “Zaha Hadid: A Visionary Whose Ideas Don’t Always Make Sense | Rowan Moore.” The Guardian. Guardian News and Media, September 26, 2015. s10.io/hadidvisionary.

67. Carpo, Mario. Digital Turn in Architecture 1992-2012: AD Reader. Somerset: Wiley, 2013.

68. Burry, Mark. “Aegis Hyposurface.” MARK BURRY, April 30, 2013. s10.io/burryaegis.

69. “Institut Du Monde Arabe (IMA).” Ateliers Jean Nouvel. Accessed October 2, 2019. s10.io/nouvelinstitut.

70. Philip Beesley Architect Inc. “Hylozoic Ground.” Philip Beesley Architect Inc. | Sculptures & Projects. Accessed October 2, 2019. s10.io/beesleyholozoic.

71. “The Official History of the RepRap Project.” All3DP, April 8, 2016. s10.io/reprap.

72. “Rapid Prototyping, Tooling and Manufacturing State of the Industry.” Wohlers Report 2005. Wohlers Associates. Accessed October 2, 2019. s10.io/protoolman.

73. Swartz, Aaron. “Guerilla Open Access Manifesto.” archive.org, July 2008. s10.io/swartzmanif.

74. Kurman, Melba, and Hod Lipson. Fabricated: The New World of 3D Printing. Wiley, 2013.

75. Gershenfeld, Neil. “How to Make Almost Anything: The Digital Fabrication Revolution.” Foreign Affairs, 2012. s10.io/digfabrev.

76. “The Fab Foundation.” The Fab Foundation. Accessed October 2, 2019. s10.io/fabfound.

77. “WikiHouse.” WikiHouse. Accessed October 2, 2019. s10.io/wikihouse.

78. Ibid.

79. Cascone, Paolo. “African Fabbers Project.” CODESIGNLAB. Accessed October 2, 2019. s10.io/africanfabbers.

80. Peek, Nadya Meile. “Making machines that make : object-oriented hardware meets object-oriented software.” MIT, 2016. 17. s10.io/peekmachines.

81. Ibid.

82. Ibid, 2.

83. Čapek, Karel. R.U.R. (Rossum’s Universal Robots). Translated by Paul Selver and Nigel Playfair. Prague, 1921. 4.

84. “Unimate – The First Industrial Robot.” Robotics Online. Accessed October 3, 2019. s10.io/unim8.

85. Ibid.

86. Cca. “Olzweg – R&Sie(n) Project Records.” CCA. Accessed October 2, 2019. s10.io/olzweg.

87. Brackets added.

88. “The Programmed Wall, ETH Zurich, 2006.” Gramazio Kohler Research. Accessed October 2, 2019. s10.io/prgrmdwall.

89. “SAM100.” Construction Robotics. Accessed October 2, 2019.

90. “BUGA Fibre Pavilion 2019.” Institute for Computational Design and Construction, 2019. s10.io/bugafibre.

91. Yablonina, Maria, and Achim Menges. “Distributed Fabrication: Cooperative Making with Larger Groups of Smaller Machines.” Architectural Design 89, no. 2 (2019): 62–69. s10.io/yabmeng.

92. Ibid.

93. Projects which engage with these ideas come from, for example, the Autonomous Manufacturing Lab, UCL as well as the Wyss Institute at Harvard University and the Institute for Dynamic Systems and Control with Gramazio Kohler Research at ETH Zurich.

94. “About.” Design Computation Lab. Accessed October 2, 2019. s10.io/dcl.

95. Carpo, Mario. “BREAKING THE CURVE: BIG DATA AND DESIGN.” Artforum, February 2014. s10.io/carpodata.

96. Carpo. The Second Digital Turn: Design Beyond Intelligence, MIT Press, 2017.

97. Retsin, Gilles, Philippe Morel, Daniel Koehler, Mollie Claypool, Achim Menges, Mario Carpo, Viola Ago, Marrikka Trotter, and Neil Leach. Discrete: Reappraising the Digital in Architecture. West Sussex, UK: John Wiley & Sons Ltd, 2019.

98. Retsin, Gilles. “Discrete Architecture in the Age of Automation.” In Discrete: Reappraising the Digital in Architecture, 7–8.

99. “Common’Hood.” Plethora Project. Accessed October 2, 2019. s10.io/cwm.

100. “Reinventing Construction: A Route to Higher Productivity”, McKinsey Global Institute, McKinsey & Company, February 2017.

101. Ibid.

102. Taylor, M., S. Wamuziri, and I. Smith. “Automated Construction in Japan.” Civil Engineering 156, no. 1 (2003): 34–41. s10.io/automjapan.

103. Ibid.

104. “Building the Future of Construction.” Built Robotics. Accessed October 2, 2019. s10.io/bltrbtcs.

105. “X – Wing.” X, the moonshot factory. Accessed October 2, 2019. s10.io/wing.

106. “About – Katerra – North America.” Katerra. Accessed October 2, 2019. s10.io/ktra.

107. “World’s Leading Construction Management Software.” Procore. Accessed October 2, 2019. s10.io/prcre.

108. “Sidewalk Labs.” Sidewalk Labs. Accessed October 2, 2019. s10.io/swl.

109. Cecco, Leyland. “’Surveillance Capitalism’: Critic Urges Toronto to Abandon Smart City Project.” The Guardian. Guardian News and Media, June 6, 2019. s10.io/survcap.

110. Tierney, T F. “Toronto’s Smart City: Everyday Life or Google Life?” Architecture_MPS, January 2019. s10.io/trntsmart.