Advancements in Artificial Intelligence creativity should make us rethink the future of landscape architecture practice.
By Phillip Fernberg, Associate ASLA, and Brent Chamberlain
If you were to thumb through old issues of Science magazine, once you hit 1967 you would come across an obscure article coauthored by Allen Newell, an esteemed pioneer of artificial intelligence research, arguing for the validity of a new discipline called computer science. In the article, Newell and his colleagues Alan J. Perlis and Herbert A. Simon address objections, common in academia at the time, to the idea that the study of computers was, in fact, a science or even a worthwhile pursuit. The questions are simple but fundamental: Is there such a thing as computer science? If so, what is it?
As you read the objections and their respective responses, you might begin to think as we did about the similar line of questioning that has been employed in landscape architecture. Substitute the computer speak with our own professional jargon and you have near carbon copies of themes from licensure advocacy meetings, ASLA conferences, or academic treatises on the state of the discipline. Computer science and landscape architecture have a surprising amount in common. They are both relatively new (at least in the official sense), they have both evolved in significant ways over the past century, and they both have been in an ongoing existential discussion about their position amid peer disciplines. This is nice to know but not revelatory.
Yet the intersection gets more interesting. One of the objections in the article states: “The term ‘computer’ is not well defined, and its meaning will change with new developments, hence computer science does not have a well-defined subject matter.” The authors’ reply is astute and resonant: “The phenomena of all sciences change over time; the process of understanding assures that this will be the case. Astronomy did not originally include the study of interstellar gases; physics did not include radioactivity; psychology did not include the study of animal behavior. Mathematics was once defined as the ‘science of quantity.’” So too with the phenomenon of landscape architecture; it just happens to change on an accelerated timeline. The field is ever shifting, retooling, and reassessing its place as our understanding of our medium and our instruments evolves. Before Olmsted, landscapes were gardens rather than systems; before Ian McHarg’s Design with Nature, those systems were not intertwined with ecology; before CAD, GIS, or Adobe, our only tools were pen and paper.
Landscape architecture is in one of those shifts now. Alongside the social and ecological issues changing paradigms of practice, the field is in the midst of another great technological leap. But it’s not happening in the way you might think. Most readers of design-centric news can probably guess the usual suspects of our perceived landscape-tech revolution: new computational approaches like BIM and parametricism are gaining traction, drones are becoming a go-to office tool for collecting site data, virtual and augmented reality walk-throughs are starting to show up on RFPs, and adoption of still-multiplying design software programs and plug-ins is rising steeply. These innovations are all fascinating, but hardly the best thing since sliced bread for anyone who experienced the pivot to computer-aided design (CAD) or the algorithmic designs trickling over from architecture in the 1990s and early 2000s. To us, they are pieces of a larger technofuturist puzzle that weaves together computer science, psychology, and design—one driven by a more profound technology: artificial intelligence (AI).
It is no secret that AI has pervaded the AEC industry; landscape architecture is no exception. But what is perhaps less obvious, or at least underestimated, is the extent to which it has changed and will be changing the way we practice in the near future. Past discussions of AI in landscape architecture have mostly focused on AI-driven tools such as SWA’s Darkflow or the automated ecology of Bradley Cantrell, ASLA, and his colleagues. Even since LAM last published about these (see “Live and Learn,” LAM, February 2019), new advances in AI or AI-adjacent statistical learning applications have sprung up to assist with anything from site inventory, reuse of construction materials, and urban design to terrain modeling, planting design, autonomous construction, and measuring emotional responses to the built environment. This is a trend worth paying attention to. But we want to go a little beyond the tools and reflect on how the coming ubiquity of intelligent systems in the profession necessitates a rethinking of the creative acts we hold so dear as designers and even the idea of creativity itself.
One of the things that makes the idea of artificial intelligence so elusive is the way we constantly shift the baseline for what counts as “thinking” or “intelligence.” As the AI historian Pamela McCorduck states in her book Machines Who Think, “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘That’s not thinking.’” Such reactions, she writes, created an odd paradox where “computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches,” and were promptly considered mere computation rather than intelligence. The esteemed computer scientist Larry Tesler had a great aphorism for this AI effect: “Intelligence is whatever machines haven’t done yet.” The reasoning goes that anything a machine (or even an animal) can conceivably do can’t possibly be intelligent behavior, because human intelligence is unique and too complex to be replicated. Perhaps the most famous example of the AI effect in recent memory is the 1997 chess match in which IBM’s supercomputer Deep Blue beat the world champion Garry Kasparov. The win was almost immediately blamed on cheating by the developers, then delegitimized as brute force (pure computational power) rather than real intelligence, then dismissed as sheer luck.
There are a number of reasonable explanations for why we constantly move the goalposts for intelligent behavior. One is that we haven’t totally settled on a definition of human intelligence, and such advances in computation help us refine that definition. After all, the idea that Deep Blue beat Kasparov by brute force is not wrong. Chess is a game of strategy based on a vast but finite number of possible paths to victory, with a clear set of rules easily taught to a computer. Deep Blue does not necessarily need critical thinking to win a game, just the ability to see all possible moves and pick the most appropriate one faster than its opponent can counter. Thus, it stands to reason that what we learned from the match is that intelligence is more complex and creative than computing combinations and decisions.
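Deep Blue’s actual system was proprietary and far more sophisticated (custom hardware, handcrafted heuristics), but the brute-force idea it embodied can be sketched in a few lines of code: exhaustively search every legal move and pick one that forces a win. This toy illustration, with all names our own invention, plays the simple stick game Nim rather than chess.

```python
# Toy brute-force game search, in the spirit of (not the substance of)
# Deep Blue. In Nim, players alternate removing 1-3 sticks; whoever
# takes the last stick wins.

def best_move(sticks):
    """Return (move, wins): a move and whether it forces a win."""
    for move in (1, 2, 3):
        if move > sticks:
            continue
        if move == sticks:           # taking the last stick wins outright
            return move, True
        # Recurse: if the opponent has no winning reply, this move wins.
        _, opponent_wins = best_move(sticks - move)
        if not opponent_wins:
            return move, True
    return 1, False                  # every line loses; play anything

print(best_move(7))  # the winning opening from 7 sticks: take 3
```

No judgment, intuition, or “critical thinking” appears anywhere in the function; it simply enumerates the whole game tree, which is exactly why such wins were dismissed as mere computation.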
Another more compelling (and possibly more accurate) explanation is one posited by the professor and researcher Michael Kearns: that “people subconsciously are trying to preserve for themselves some special role in the universe,” and by undermining machine intelligence they can continue to feel unique and special. This tendency shows up in everything from academic theology (think “God of the gaps”) to cinematic sci-fi (think Blade Runner or Ex Machina), and it is especially strong in design discourse, where the stars among us sometimes hold an obfuscated or inexplicable caricature of “design process” as sacrosanct. Take the example of drawing. When CAD and digital photo collage were first commercialized, they sparked what has since become a rather tired debate over the value of analog versus digital drawing methods. Regardless of the specific tool, be it pen and paper or stylus on a screen, drawing is and will remain important in landscape architecture. As a matter of identity, we continue to espouse the proverbial napkin sketch as the je ne sais quoi that makes a designer, an ineffable “trademark of creativity,” as the architectural historian Winfried Nerdinger once put it. Yet with our modern technology, the napkin no longer stands for the design artifact itself; rather, it is a symbol of how we define ourselves as creative agents.
While it has been anecdotally understood for years, recent research in design psychology has given credence to drawing as a spontaneous encapsulation of inspiration and, more importantly, a deliberate act of problem-solving endemic to design. One study by the design theory researcher Sabine Ammon suggests that exploratory sketching can allow the designer to “literally draw inferences” through an iterative process of drawing lines, adding and generating variations, and testing assumptions. This is what the psychologist Robert Sternberg calls experiential intelligence (now often called creative intelligence). It is that form of intelligence that focuses on the capacity to be intellectually flexible and innovative. As we consider a future of artificial intelligence within the discipline, should we assume that this act can never be fully replicated or imitated by a machine? Or, on the other hand, can a machine learn a designer’s creative intelligence through observation and analysis?
One of the most significant hurdles facing AI is the development of intelligence that can both perform tasks and interpret their context—algorithms are quite good at the former but still far from mastering the latter. There are a few ways of approaching this. One is to model the mind, assigning symbols to the processes of reasoning and problem solving we employ in our everyday lives. Another is to model the brain by simulating its architecture and nervous system. “The problem with this approach,” writes the AI researcher Michael Wooldridge, “is that the brain is an unimaginably complex organ…and we don’t remotely understand the structure and operation of these well enough to duplicate it.” The best we can do in the meantime, he suggests, “is to take inspiration from some structures that occur in the brain and model these as components in intelligent systems.” This is the basis for the now immensely popular AI approach called machine learning. While we have barely scratched the surface of understanding the human brain, methods like machine learning have still brought astounding technological advancements in a relatively short span of time. The past 10 years alone have seen an explosion of applications such as virtual assistants, the recommender systems that suggested your latest Netflix binge, and, yes, even creative acts endemic to art and design.
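For readers curious what “taking inspiration from structures in the brain” looks like in code, here is a minimal sketch (illustrative only, not drawn from any production system) of a single artificial neuron, the classic perceptron that underlies modern neural networks, learning the logical AND function from labeled examples rather than being explicitly programmed.

```python
# A single brain-inspired "neuron" that learns from examples. All
# names and numbers here are illustrative assumptions, not any
# particular library's API.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a two-input binary classifier from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # "Fire" (output 1) if the weighted sum crosses the threshold.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - out
            # Nudge the weights toward the correct answer: the "learning."
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND from four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # matches the labels: [0, 0, 0, 1]
```

Modern systems stack millions of such units and use far more refined training rules, but the principle is the same: behavior emerges from examples, not from rules a programmer wrote down.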
Numerous examples of AI creativity have been developed, from producing art to jamming with jazz musicians, models that can talk to and learn from spiders, and even restyling site plan graphics. Much like a human, a machine can be “taught” to learn from past works or examples and develop a passable form of agency or self-learning that allows it to create novel outcomes a human couldn’t have dreamed of before. The digital advertising campaign “The Next Rembrandt,” for example, built an AI to analyze 3-D scans of every single work by the centuries-dead artist and produce an original painting in the way he conceivably would have. The software mimicked the geometry, body position, lighting conditions, even the intricacies of brushstroke style Rembrandt could have employed, to create a work so evocative it could fool many into thinking it an original. Its purpose, however, was not to create the world’s most convincing fake but to call into question the notions of inimitable, personal creativity the advertising and art worlds held so dear. It certainly stirred the pot in this regard. The art critic Jonathan Jones called the project “a new way to mock art, made by fools,” saying a digital fake would always lack the emotional heft that gave inspiration to a human original.
Now consider how the same conflict of creative identity might play out in landscape architecture. If an AI can produce such stunning results with something as elusive as a famous painter’s brushstrokes, could it not do something similar with the designs and sketches of Farrand, Parpagliolo, or Burle Marx? Would we balk at the plans because they lack the life experiences the designers poured into each line? Probably. But that would be beside the point. As already noted, the argument in the AI world has never really been about whether machines can think and create as intelligent humans do in order to supplant them (we can thank dystopian media and an unfortunate PR move by AI’s founders for that misconception), but rather about where the limits of human intelligence can be revealed, and its creative outcomes improved, through partnership with AI systems. This kind of research is being done as we write, and it holds a lot of promise.
So then, this nebulous pervasion of AI into the palms of our hands and the portraits we admire raises the question: How will landscape architects define the future of practice in the age of AI? Will we resist it, cherry-pick its tools in whatever way is convenient, or claim our creative agency within the system by working with AI to bring clarity to our value? We often think of machines as using explicit definitions of terms to find an optimal solution to a problem. But landscape architecture is not an optimization-driven discipline; the broader context of our practice lies within the wicked problems of our day. As the Landscape Architecture Foundation’s New Landscape Declaration states, “Landscape architects are uniquely positioned to bring related professions together into new alliances to address complex social and ecological problems.” We bring disparate interests together to “give artistic physical form and integrated function to the ideals of equity, sustainability, resiliency, and democracy.”
Maybe AI can help us with that, and in ways we couldn’t otherwise imagine—solving climate change, social equity, and good design all at once is a tall order, and our brains can only cross-analyze so much at a time. We landscape architects are systems thinkers by nature, yet we haven’t really taken the time to consider how our practice fits into a greater technological system being organized beyond our scope. A new generation of well-intentioned techies fancy themselves placemakers. They have all the tools needed to autonomously produce landscapes fit for the 21st century, but none of the experiential intelligence of the landscape design process. Do we let those technology designers take the reins because we haven’t thought of ourselves in their terms, or do we work with and educate them because we have?
Phillip Fernberg, Associate ASLA, is a freelance designer, researcher, and PhD student at Utah State University.