Material(ism) for Architects: a Conversation with Manuel DeLanda
Friday, October 8, 2010, 10:05
Interview by Corrado Curti
If architecture – as Lebbeus Woods says – is about building ideas, then we may easily consider the philosopher, artist and writer Manuel DeLanda one of the most influential and active archistars there are.
Although architecture is not the direct object of DeLanda’s speculations, his ideas and writings provide architectural thinking with valuable insight into the methods and models of scientific discourse, which is critical for developing a coherent experimental practice.
Manuel DeLanda lecturing
CC: What role do you believe materialist philosophy can have in relation to contemporary scientific research and, in general, to research as the activity of exploring original paths of thought in any given field of knowledge? Or rather, do you envisage a way for philosophy to actively engage and affect everyday life and society?
MD: Philosophy can help science work out its own metaphysics, or its ontology. For example, as materialists can we really believe that there are “immutable and eternal laws of nature”, or is this a transcendent (and illegitimate) concept borrowed from religion? (All early scientists were deeply Christian.) And if it is indeed a theological fossil, how do we get rid of it while preserving the laws’ objective content? In my work I have tried to do this by shifting the focus from the equations themselves (e.g. the equations of Newtonian mechanics) to the topological invariants of those equations (number of dimensions, distribution of singularities), and then arguing that unlike “laws” these invariants are immanent (not transcendent), and hence valid entities in a materialist metaphysics. For designers (like architects) this has the consequence that the materials they engage with can be seen in a new light: instead of a matter that obeys laws – an obedient stuff that is shaped by godly commands (“let there be light”) – a matter inhabited by immanent singularities is active and morphogenetic.
Like the soap film that Frei Otto used to generate the hyperbolic paraboloids he needed for his tent-like roofs at the Olympic pavilion he designed: the singularity here is a minimum of surface tension that gives the soap film the active tendency to wrap itself into minimal surfaces.
In the transcendent picture form comes from the outside (god’s mind) and is imposed on an inert material; in the immanent one, a human designer does not impose a form but teases it out of a morphogenetically pregnant material: humans and materials form a partnership in the production of form.
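The minimum MD describes can be stated in standard variational terms: a free soap film settles into the shape that minimizes its area, and at equilibrium its mean curvature vanishes everywhere:

```latex
A[S] = \int_S \mathrm{d}A \;\to\; \min
\qquad\Longrightarrow\qquad
H = \tfrac{1}{2}\left(\kappa_1 + \kappa_2\right) = 0 ,
```

where κ₁ and κ₂ are the principal curvatures of the surface. Equivalently, by the Young–Laplace relation the pressure jump across an interface is proportional to H, so an open film with equal pressure on both sides must have H = 0 everywhere: a minimal surface, saddle-shaped wherever it is curved, like Otto’s anticlastic roofs.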
“To a materialist a typology can become an obstacle to thinking about the processes that produced the items it classifies, and it can hide the sources of variation that give the world its expressivity.”
CC: The mutual borrowing of concepts between philosophy and architecture goes back a long way, but by and large this borrowing has occurred on a purely metaphorical level. In your contributions to architectural thinking you suggest the possibility of applying philosophical concepts to architecture as a basis for new methods of experimentation; I’m referring in particular to the application of genetic algorithms¹ and the introduction of three crucial concepts: population, intensive and topological thinking. It seems to me that while the use of genetic algorithms as design devices calls for a radically new design methodology and role for the designer, it doesn’t challenge the result of the process – that is, a building, whether virtual or real – to the same degree, because it doesn’t deploy the inherent interactive qualities of buildings, i.e. the ways they afford inhabitants certain uses, lend themselves to transformations, and intensify or block flows of matter (including people) and energy. Basically, I think that the space investigated by the architectural searching device should not be limited to a building’s properties but should extend to its capacities as well, allowing us to explore new ways of inhabiting buildings as well as creating new models. Does this hypothesis make sense to you? If so, what set of conceptual tools would such an exploration require?
MD: I think you are correct that every application of simulated evolution (whether to evolve music, paintings, or buildings) must bring extra resources to supplement the evolutionary software itself. In particular, using genetic algorithms in architecture, or any other field of art or science, demands that we give an explicit definition of fitness, in addition to specifying the genotype (sequences of CAD instructions, for example) and the phenotype (a 3D computer model of a building). In every generation, as the population of CAD sequences is ready to reproduce, the definition of fitness guides the evaluation of the chances each sequence has of making multiple copies of itself.
image: left – detail of a pole used to support a vine, in Arborea, Bussoleno, TO; right – detail of the imperial residence of Katsura. From A. Bocco, G.C. Cavaglià, ‘Flessibile come di pietra’, CELID, Turin, 2008; images from the archive ‘Eco – Disegno Fotografie Parole dell’ambiente costruito’, DICAS, Polytechnic of Turin
In some applications, like the design of analog electronic circuits, the fitness evaluation is done by another program: a standard piece of software that tests circuits for validity. In architecture we would need at least three such external evaluators of fitness: 1) A standard package from structural engineering that performs finite-element analysis, calculates stress distributions, and checks for structural integrity. 2) A program to check for the satisfaction of the aesthetic values of the designer (NOT a program capable of making aesthetic judgements in general, whatever that may be), perhaps implemented as a neural net trained to recognize patterns and designs that the designer already likes. 3) A multi-agent simulation in which simple AI agents are given the capacity to move around a 3D model, treating the walls, doors, staircases, and hallways as surface layouts that afford the agents opportunities and risks for action. As you said, the 3D model should be treated not only as possessing properties (areas, heights, geometric shape) but also capacities to affect and be affected by the agents, that is, affordances.
The collectivity of agents would then be unleashed in the 3D model, and a program would check whether desired patterns of traffic (free flow, congestion) do indeed form, or whether certain patterns of usage (as shelter, as a place to gather) do emerge. The results of all three evaluations would then be used to generate the final fitness score for each sequence of CAD instructions, once every generation.
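The loop MD outlines – genotype, phenotype, and externally supplied fitness – can be sketched in a few lines of Python. Everything below is an invented toy: the instruction vocabulary and the three scoring functions are placeholders standing in for the structural, aesthetic and usability evaluators, not real engineering, neural-net or agent-based software.

```python
import random

# Toy "genotype": a sequence of abstract design instructions (stand-ins
# for the CAD instruction sequences MD mentions).
INSTRUCTIONS = ["extrude", "loft", "twist", "shell", "mirror"]

def random_genotype(length=8):
    return [random.choice(INSTRUCTIONS) for _ in range(length)]

def structural_score(genotype):
    # Placeholder for evaluator 1 (finite-element / structural integrity).
    return genotype.count("shell") / len(genotype)

def aesthetic_score(genotype):
    # Placeholder for evaluator 2 (a net trained on the designer's taste).
    return genotype.count("twist") / len(genotype)

def usage_score(genotype):
    # Placeholder for evaluator 3 (a multi-agent usability simulation).
    return genotype.count("extrude") / len(genotype)

def fitness(genotype):
    # The three external evaluations combine into one score per generation.
    return (structural_score(genotype)
            + aesthetic_score(genotype)
            + usage_score(genotype))

def evolve(pop_size=30, generations=40, mutation_rate=0.2):
    population = [random_genotype() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitter sequences get more chances to make copies of themselves.
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]              # crossover
            if random.random() < mutation_rate:    # occasional point mutation
                child[random.randrange(len(child))] = random.choice(INSTRUCTIONS)
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)
```

In a real pipeline the three placeholder functions would be replaced by calls out to the external evaluators, which is exactly where the "extra resources" MD insists on enter the loop.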
CC: Thinking in terms of population as opposed to typologies, I would like to draw your attention to traditional, pre-industrial buildings. Their recurrent traits show variations, which have commonly been interpreted as variations of typological entities, but which appear to me more like the result of an historical process, where the set of possible solutions was highly limited by scarcity and by labour-intensive processes, from production to construction. Innovations in building materials, techniques and processes have evolved over the last 300 years, but architecture is still trapped in the typological loop, repeating obsolete schemes and solutions or breaking them exclusively through formal expressions of “novelty”. In this sense architectural research is completely failing to learn from the complexity that made traditional stable solutions emerge out of an historical process under rather homogeneous constraints, and is therefore incapable of letting similarly stable and efficient solutions emerge out of the new, heterogeneous constraints posed by contemporary conditions. Why is typological thinking wrong from the point of view of a materialist philosophy?
In your writings for Domus² magazine you traced several trajectories for new ways of conceiving buildings and architecture, including biomimetics, statistical thinking, and learning to gain from randomness and imperfections. How can these and other tools help researchers move beyond typological thinking and start exploring the new spaces defined by innovative building materials, techniques and processes?
MD: I have little to say about architectural history, or about the use of static typologies that inhibit innovative design, but let me answer the question with an example from chemistry. There is nothing inherently wrong with typological thinking: every field of science has had to start by first classifying the items in the domain under study.
“Just as math (differential and topological geometries) was indispensable in breaking the hold that Aristotelian logic had on our thought, so math in action (that is, computer simulations in which equations are animated through recursion) can provide further tools to achieve this break.”
The problem is to take the results of such classifications and reify them as a set of eternal archetypes. Thus, there is a beauty to the rhythms of the Periodic Table of the chemical elements, and the classification should be considered a triumph for science. But if we then reify its contents and start believing in the existence of “Hydrogen” (with a capital H, that is, “hydrogen in general”) or “Oxygen”, then we are introducing transcendent entities into our ontology: Aristotelian essences. (The essence of Hydrogen is to have a single proton in its nucleus.) Instead, a materialist would insist that: 1) there is no such thing as Hydrogen, only a very large population of individual hydrogen atoms, each of which was born within a star (or other powerful cosmic event) through a process of nucleosynthesis; 2) while it is true that the identity of a hydrogen atom is changed if we add one more proton to it (it becomes a helium atom), we must consider, in addition to what all hydrogen atoms have in common, the sources of variation. In this case the variation comes from neutrons: possessing one of these yields a version of hydrogen called “deuterium”, while possession of two yields “tritium”.
These variants, or isotopes, play a very important role in chemistry, but we would miss this if we focused only on “Hydrogen”. Thus, to a materialist a typology can become an obstacle to thinking about the processes that produced the items it classifies, and it can hide the sources of variation that give the world its expressivity.
CC: I think that the difference between topological and typological thinking can be roughly summarized as the difference between thinking in terms of entities (typology) and thinking in terms of relations that define a space of possible entities (topology), as in the example you used of the soap bubble: where typologists see bubbles of different shapes and sizes, topologists see a relation between soap molecules minimising surface tension, and the same rule describes a space of potential entities that includes many different forms besides bubbles, such as crystals. Generally speaking, I believe there is plenty of work to be done in thinking of buildings as the actualisation of virtual forms based on relations rather than shapes, and of architecture as the discipline that investigates these relations instead of their actualisations. Is there something that could be appropriated from typological analysis to help define the mode in which topological thinking should be developed, or should such a process start from zero, borrowing instruments and concepts from other fields of knowledge?
“Soap Bubbles” by Jean Siméon Chardin (French, 1699–1779), via www.metmuseum.org/
MD: Given the materialist emphasis on production processes, topological thinking is useful to explain the regularities in these processes. Thus, the simplest morphogenetic process is one that optimizes (minimizes or maximizes) some property: surface tension in the case of soap bubbles or bonding energy in the case of crystals. So, it is not strictly correct to say that the difference is between entities and relations. It is rather between the general and the particular, on one hand, and the individual singular and the universal singular, on the other.
Thinking in terms of general categories, and particular instances of those categories, has been the dominant way of thinking from Aristotle to Kant (and beyond). So in a new materialist ontology we must replace all particular instances by singular (historically unique, non-reproducible) individuals: individual atoms and molecules; individual organisms and ecosystems; individual communities, organizations and cities, and so on. Then, the regularities that general categories capture must be explained or accounted for by the singularities that structure the space of possible outcomes of a production process. Because these singularities are shared by many different entities (e.g. the single minimum shared by bubbles and crystals) these are universal singularities.
I just finished a new book called Philosophy and Simulation: The Emergence of Synthetic Reason. There I try to extend the concept of the “structure of a possibility space” from the well-known cases (phase space and its singular points and loops) to several other cases: the space of possible genes and proteins, for example. I argue that just as math (differential and topological geometries) was indispensable in breaking the hold that Aristotelian logic had on our thought, so math in action (that is, computer simulations in which equations are animated through recursion) can provide further tools to achieve this break.
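A minimal computational illustration of a universal singularity (my example, not one from the book): a single minimum toward which very different individual trajectories converge, just as bubbles and crystals both fall under a minimization principle.

```python
def step(x, dt=0.01):
    # One step of gradient descent on the potential V(x) = x**2 (dV/dx = 2x):
    # the minimum at x = 0 is the singularity structuring this possibility space.
    return x - dt * 2 * x

def trajectory(x0, steps=2000):
    # Follow one individual trajectory from the initial condition x0.
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

# Historically unique "individuals" (different initial conditions) are all
# drawn to the same universal singularity at x = 0.
finals = [trajectory(x0) for x0 in (-5.0, 0.3, 12.0)]
```

The singularity is a property of the dynamics, not of any one trajectory: change the starting point and the same point attractor still organizes the outcome.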
CC: Another crucial concept that you have introduced in your writings is the difference between extensive and intensive borderlines³, and the relation between extensive borderlines and intensity gradients. Architecture is essentially about framing spaces, and therefore it mostly deals with borderlines as well. It is hardly surprising, then, that in socio-political conditions of deep friction architectural instruments can be détourned and applied as border instruments, walls being the easiest example (former Berlin, Israeli, Ceuta, California/Mexico, Cyprus…). Drawing an analogy with thermodynamics, it could be argued that when intensities in a socio-political conflict reach a critical point, extensive borderlines made out of architectural elements appear. Experimental architecture has a long history of works and projects on these extreme conditions, most of which are based on the assumption that changing the nature of the border can affect the two confronting systems in an amplified manner. In your opinion does this analogy operate on a purely metaphorical level, or does it rely on a shared “engineering working diagram”?
Do you consider intervening on a border to cause an amplified reaction in the system a viable strategy, or a weak approach derived from a strictly disciplinary viewpoint?
MD: The first thing we need to do to apply the idea of an intensive gradient to social processes is to indicate in which way this would not be a metaphor.
Temperature, pressure, speed, density, and other intensive properties have the characteristic that a gradient (that is, a difference: hot/cold, high/low, slow/fast) can drive a process. What properties are like that in social processes? In the case of borders or frontiers, a wage differential is one of the main drivers of migratory flows (from low to high wage countries) and of industrial investment (from high to low wage countries). So in this case we do have an interaction between intensive gradients (wage differentials) and extensive borderlines (frontiers). There are other socio-intensive properties (gradients of solidarity, of legitimacy, of status or prestige) that help drive communal or organizational processes, as well as define the borders of communities and organizations.
But we know very little about social topology, so it is hard to say whether the structure of the possibility spaces of processes driven by these gradients can be studied by analogy with, say, phase space, or whether we need to use multi-agent simulations to tease out the singularities. I tackle this issue in my new book.
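MD’s wage-differential example can be given a minimal numerical form. The linear-response rule below is a toy of my own, by analogy with heat conduction, not an empirical model: a flow driven by the gradient, which in turn erodes the gradient.

```python
def equalize(wage_a=10.0, wage_b=30.0, k=0.05, steps=200):
    # Each step, a migratory/investment flow proportional to the wage
    # differential moves down the gradient; the inflow bids the high wage
    # down and the outflow bids the low wage up, eroding the differential,
    # just as heat flow erodes a temperature difference.
    for _ in range(steps):
        flow = k * (wage_b - wage_a)   # the intensity difference drives the flow
        wage_b -= 0.5 * flow
        wage_a += 0.5 * flow
    return wage_a, wage_b
```

The differential decays geometrically (by a factor of 1 − k per step), so after enough steps the two wages are nearly equal while their sum is conserved: the gradient is the motor, and its annihilation is the endpoint of the process.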
CC: To conclude I would like you to consider the study model by Lebbeus Woods for the installation called The Fall⁴, which was held at the Fondation Cartier in Paris in 2002:
image: The Fall, 2002: view through trajectory field (model) Concept and design: Lebbeus Woods with the collaboration of Alexis Rochas, New York © 2002 by Lebbeus Woods
What I find intriguing here is that it resembles the deformation of a uniform field of structural elements under the effects of load distributions and space-carving operations: the traditional scheme of highly concentrated and optimized load-carrying elements would be substituted by a field of redundant elements, each with a limited carrying capacity, while the load-bearing capacity of the system would emerge as a property of the whole. Coupling the map of densities of the elements with the probabilistic load distributions would result in an optimization of densities instead of elements. Given the limited carrying capacity of the single element, such a system is likely to show non-linear behaviour and adaptability to unexpected load distributions, rather than simple resistance. As a consequence, the architectural design process would be transformed from shape-definition into space-carving within the structural field, and would result in a complex interaction with the overall structural behaviour and density distribution. This reading of The Fall exemplifies how concepts like emergence and statistical thinking can affect and radically transform architecture, provided that architects – much like you wrote about philosophers – stop worrying about losing their centrality in the design process.
The given example embodies how structural intensities are core to a conceptual transformation in the design process. From your point of view what other intensive zones should architectural thinking start to explore to benefit from the potentialities inherent in the conceptual tools brought in by materialist philosophy?
MD: I never saw The Fall so I can’t comment on it. But I saw the installation The Storm⁵ at Cooper Union and it was a true piece of intensive art: nothing but gradients and forces in interaction.
image: The Storm installation, New York (2001), work of Lebbeus Woods; images from http://lebbeuswoods.wordpress.com
It produced a beautiful aesthetic effect the moment you walked into the room. Also, the “projections” of these interacting forces that Lebb drew on the walls where the cables were attached seemed like diagrams of the pieces, pictorial renderings that made visible what was invisible.
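CC’s reading of The Fall – a redundant field whose carrying capacity emerges at the level of the whole – can be sketched with a toy equal-load-sharing model (my illustration; the numbers and the sharing rule are assumptions, not drawn from the installation).

```python
def field_holds(total_load, n_elements=100, capacity_each=1.0):
    # Equal load sharing among surviving elements: any overloaded element
    # fails and sheds its share onto the others. Whether the load is carried
    # is a property of the field as a whole, not of any single element.
    survivors = n_elements
    while survivors > 0:
        share = total_load / survivors
        if share <= capacity_each:
            return True   # the field, collectively, carries the load
        survivors -= 1    # one element fails; the rest take up its load
    return False          # a cascade of failures: the field collapses
```

No single element can carry more than 1 unit, yet the field holds 100: the capacity is emergent, and overloading it triggers a non-linear cascade rather than a gradual decline.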
MANUEL DELANDA (born 1952 in Mexico City) has lived in New York since 1975. He is the Gilles Deleuze Chair of Contemporary Philosophy and Science at the European Graduate School in Saas-Fee, Switzerland, a lecturer at Canisius College in Buffalo, New York, a lecturer at the University of Pennsylvania School of Design in Philadelphia, Pennsylvania, and an adjunct professor at the Pratt Institute School of Architecture in Brooklyn, New York.
He is the author of War in the Age of Intelligent Machines (1991), A Thousand Years of Nonlinear History (1997), Intensive Science and Virtual Philosophy (2002) and A New Philosophy of Society: Assemblage Theory and Social Complexity (2006). He has published many articles and essays and lectured extensively in Europe and in the United States. His work focuses on the theories of the French philosopher Gilles Deleuze on one hand, and on modern science, self-organizing matter, artificial life and intelligence, economics, architecture, chaos theory, the history of science, nonlinear dynamics and cellular automata on the other. DeLanda became a principal figure in the “new materialism” through his application of Deleuze’s realist ontology. His research into “morphogenesis” – the production of semi-stable structures out of the material flows that are constitutive of the natural and social world – has been of interest to theorists across many academic and professional disciplines.
¹ M. De Landa, ‘Deleuze and the Use of the Genetic Algorithm in Architecture’, in A. Rahim (ed.), Contemporary Techniques in Architecture (Architectural Design), January 2002
² M. De Landa, ‘Matter Matters’, a series published in Domus magazine from No 884 to No 901, between 2005 and 2007:
M. DeLanda, ‘Building with Bone and Muscle’, in Domus, No 884, September 2005, pp. 208-09 http://lebbeuswoods.wordpress.com/2008/12/03/manuel-delanda-matters-1/
M. DeLanda, ‘One Dimension Lower’, in Domus, No 886, November 2005, pp. 136-37 http://lebbeuswoods.wordpress.com/2008/12/23/manuel-delanda-matters-3/
M. DeLanda, ‘The Importance of Imperfections’, in Domus, No 888, January 2006, pp. 136-37
M. DeLanda, ‘Events Producing Events’, in Domus, No 889, February 2006, pp. 100-01 http://lebbeuswoods.wordpress.com/2008/12/13/manuel-delanda-matters-2/
M. DeLanda, ‘Evolvable Materials’, in Domus, No 890, March 2006, pp. 164-65
M. DeLanda, ‘Extensive and Intensive’, in Domus, No 892, May 2006, pp. 152-53
M. DeLanda, ‘Material Expressivity’, in Domus, No 893, June 2006, pp. 122-23
M. DeLanda, ‘Smart Materials’, in Domus, No 894, July 2006, pp. 122-23
M. DeLanda, ‘Crucial Eccentricities’, in Domus, No 895, September 2006, pp. 262-63
M. DeLanda, ‘Matter Singing in Unison’, in Domus, No 896, October 2006, pp. 286-87
M. DeLanda, ‘High Intensity Environments’, in Domus, No 897, November 2006, pp. 148-49
M. DeLanda, ‘The Foam and the Sponge’, in Domus, No 899, January 2007, pp. 140-41
M. DeLanda, ‘Opportunities and Risks’, in Domus, No 901, March 2007, pp. 192-93 http://lebbeuswoods.wordpress.com/2009/01/30/manuel-delanda-opportunities-and-risks/
³ M. DeLanda, ‘Extensive and Intensive’, in Domus, No 892, May 2006, pp. 152-53
⁴ P. Virilio (ed.), “Unknown Quantity”, Thames & Hudson, New York / Fondation Cartier pour l’art contemporain, Paris, 2002, pp. 150-163
⁵ L. Woods, “The Storm and The Fall”, Princeton Architectural Press, New York, 2004