Algorithms are bringing new kinds of evidence and predictive powers to the shaping of landscapes.
By Mimi Zeiger
Tree. Person. Bike. Person. Person. Tree. Anya Domlesky, ASLA, an associate at SWA in Sausalito, California, rattles off how she and the firm’s innovation lab team train a computer to recognize the flora and fauna in an urban plaza.
The effort is part of the firm’s mission to apply emergent technologies to landscape architecture. In pursuing the applied use of artificial intelligence (AI) and machine learning, the research and innovation lab XL: Experiments in Landscape and Urbanism follows a small but growing number of researchers and practitioners interested in the ways the enigmatic yet ubiquitous culture of algorithms might be deployed in the field.
Examples of AI and machine learning are all around us, from the voice recognition software in your iPhone to the predictive software that drives recommendations for Netflix binges. While the financial and health care industries have quickly adopted AI, and use in construction and agriculture is steadily growing, conversations about how such tools translate to the design, management, and conservation of landscapes remain on the periphery of landscape architecture. This marginality may persist because, despite their everyday use, mainstream understandings of AI are clouded by clichés—think self-actualized computers or anthropomorphic robots. In a recent essay on Medium, Molly Wright Steenson, the author of Architectural Intelligence: How Designers and Architects Created the Digital Landscape (The MIT Press, 2017), argued that we need new clichés. “Our pop culture visions of AI are not helping us. In fact, they’re hurting us. They’re decades out of date,” she writes. “[W]e keep using the old clichés in order to talk about emerging technologies today. They make it harder for us to understand AI—what it is, what it isn’t, and what impact it will have on our lives.”
So then, what is a new vision—a vision of AI for landscape?
At a layperson’s level, what we consider intelligent are tools, devices, or entities that use suites of algorithmic code to process information. What we might, as humans, describe as “thinking” rapidly takes place in black boxes. Trained on “learning sets” of information, AI tools are taught to identify specific inputs. As such, these tools are incredibly good at sorting data. “The code can recognize a lot of chaos at eye level and quickly detect what is human and nonhuman,” Domlesky explains.
Domlesky and her lab partner, Emily Schlickman, ASLA, used Darkflow, a real-time object detection system, to revisit the findings of William H. Whyte’s Street Life Project and his 1980 study The Social Life of Small Urban Spaces. Whyte’s research relied on direct observation and time-lapse photography to document how New Yorkers occupied public space. Domlesky and Schlickman’s project is part postoccupancy study and part historical research. When SWA and Thomas Balsley Associates merged in 2016, SWA inherited Balsley’s back catalog of corporate bonus plazas and other parks in and around New York City. (Balsley was an editorial adviser on this project.) Domlesky says the team is also interested in “infrastructural leftovers,” alleys, and “tactical urbanist interventions.”
By teaching a machine-learning algorithm how to identify a tree, a person, a bike, etc., the SWA team was able to let Darkflow do the sorting of where, when, and for how long people were in a series of Manhattan plazas constructed or renovated in the past 15 years. The team filmed these places over the course of a week, and then, working with a data scientist, fed the footage to the algorithm. The software appears to read the video by drawing a colorful bounding box around objects (such as people) and assigning identifying count numbers and categories. This data can then be processed into heat maps that indicate dwell time or pedestrian traffic. SWA’s findings weren’t necessarily radical reconsiderations of Whyte, but they do provide solid metrics to back up a changing set of assumptions about how we occupy public space.
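The pipeline described above, in which per-frame object detections are aggregated into a heat map, can be sketched in a few lines of Python. The grid size, coordinate format, and `dwell_heatmap` function here are illustrative assumptions, not SWA's actual code:

```python
from collections import Counter

def dwell_heatmap(detections, cell_size=2.0):
    """Accumulate per-frame person detections into grid-cell counts.

    detections: iterable of (frame, x, y) positions in meters. Each
    frame a detection falls inside a cell adds one "frame of
    occupancy" to that cell, a rough proxy for dwell time.
    """
    counts = Counter()
    for frame, x, y in detections:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return counts

# Hypothetical footage: one person lingering near a bench at (5, 5)
# for 30 frames, another walking through along y = 0.5.
detections = [(f, 5.3, 5.1) for f in range(30)] + \
             [(f, float(f), 0.5) for f in range(10)]
heat = dwell_heatmap(detections, cell_size=2.0)
print(heat[(2, 2)])  # → 30 (the lingering person's cell)
```

Cells with high counts read as dwell; cells a walker passes through accumulate only a frame or two each, which is what lets a heat map separate lingering from foot traffic.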
“We found that some of Whyte’s findings are simply not true anymore,” Domlesky says. “[His] study values the idea of street theater—men watching women. Our analysis shows a huge surge of devices. People are in public space to be around other people, but not watching other people. The idea of street theater is less important. This kind of information allows us to reevaluate the dominant forms of new urban space.”
The XL research and innovation lab, in partnership with Penn State University and with funding from the Landscape Architecture Foundation, used AI tools paired with video to conduct a postoccupancy study of Hunter’s Point South waterfront park. The first phase of the park withstood Hurricane Sandy, and the new research focuses on both user occupation and coastal resilience. As with the plaza study, small segments of video footage were processed using Darkflow to detect objects. SWA’s data scientist modified the machine learning algorithm, adding more than 1,000 lines of new code to enable tracking of objects across the site, counting, and ultimately creating a heat map to show a gradation of user locations. Additionally, they used the Python programming language to define the output. For XL’s research on coastal resilience and SWA’s waterfront projects, the team uses Aquaveo, a hydrodynamic modeling software, to simulate flood dynamics at Hunter’s Point South.
Domlesky sees AI as a way to digest information—to crunch data. At a time when clients crave quantifiable answers beyond projective renderings (SWA works with many public sector and health care clients that are, in her words, “metrics sensitive”), she notes, the analytical tools provide hard evidence. There’s also a potential shift in design authorship at play. “Design is so personality-focused and about what the genius designer thinks and feels,” she says. “But anecdotes have their limitations. We see machine learning as an alternative model, not that machines should take over.”
But what if machines do take over? In 2016, Bradley Cantrell, ASLA; Laura J. Martin; and Erle C. Ellis published the paper “Designing Autonomy: Opportunities for New Wildness in the Anthropocene” in the journal Trends in Ecology & Evolution. The group—a landscape architect, a historian and ecologist, and an environmental scientist—speculated on the ways intelligent systems might be used in the management of protected wilderness areas. They suggest that through responsive technologies, robotics, and AI, a landscape could, over time, learn to conserve itself and ultimately be managed without human input. The paper also included a proposal for a wildness creator, a conceptual design for an autonomous entity that would learn from its own development, taking in data from a surrounding context—pollutants, noise, human occupation—and implementing the needed protocols for the continued environmental success of the ecosystem. (For more on this research, see “Ecology on Autopilot,” LAM, June 2017.)
Similar to Domlesky’s observation that there are now alternatives to the authorial hand of the designer, design autonomy envisions an exciting future in which the natural environment, when paired with the right AI tools, would author a landscape free from the methods and inherent cultural biases that come with human-centered design. As the trio writes, “In time, the operations of the wildness creator would become unrecognizable and incomprehensible to human beings, the resulting ecological patterns and processes would diverge from any previously created and sustained by humans, and nonhuman species and environmental processes at the site would be able to go about life without experiencing human influence.”
Yet it would be wishful thinking to believe that AI can provide the hands-off neutrality so craved. Applications of artificial intelligence in policing and human resources have been problematic. Recently Amazon scrapped the AI tool it had developed to sift through job candidates because it showed bias against women. Trained on résumés submitted to the company over a decade, the computer model favored patterns shown in male applications, thus mirroring tech’s existing gender imbalance. And in May 2016, Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner of ProPublica published “Machine Bias,” a report showing that software developed to predict future criminals systematically rated black defendants as higher risk of committing future crimes than white defendants. As landscape practitioners adopt machine learning, these examples serve as cautionary tales about the dark side of getting carried away by the thrall of techno-utopia. They also ask us to look more closely at the seemingly neutral assumptions embedded in the pedagogy of how AI “learns.”
Still, Cantrell, currently the chair of landscape architecture at the School of Architecture at the University of Virginia, sees these seemingly new technologies as part of a slow evolution of landscape architecture’s tool kit. Because machine learning has the power to recognize patterns and processes in landscapes over time, it could bridge the field’s bifurcated history, which places formal design (gardens, plazas, and other spaces for humans) on one side and large-scale regional planning on the other. AI and machine learning extend from and change modes of analysis just as Ian McHarg’s overlays, and then geographic information systems, ushered in new, comprehensive ways of understanding ecology. He gives land use classification as an applied example of AI’s strengths. Rather than having researchers visually scour aerial photographs, the method feeds pixel data from images directly to the algorithm. “The relationships that the computer makes classify land in ways that we wouldn’t normally see it: Patterns in ecology or river systems might be registered as working on similar time scales as productive agriculture,” Cantrell explains over the phone, adding that machine learning undoes the binary between natural and constructed, opening more complex relational patterns between the two.
Machine learning entities “can act in the world as mediators,” Cantrell says, adding that humans are not in opposition to the natural world, but intertwined with it, so our technology needs to reflect that linkage.
In his words are echoes of the philosopher Bruno Latour’s 2004 Politics of Nature, which includes a critique of political ecology among its arguments. In the introduction to the work, Latour writes: “Far from ‘getting beyond’ the dichotomies of man and nature, subject and object, modes of production, and the environment, in order to find remedies for the crisis as quickly as possible, what political ecologists should have done was slow down the movement, take their time, then burrow down beneath the dichotomies like the proverbial old mole.”
For the scientist David J. Klein, the chief AI developer for Conservation Metrics (a company that provides the measuring tools and data for wildlife conservation and management) and Cantrell’s occasional collaborator and colleague, it is precisely where the built environment intersects, abuts, or intrudes upon wilderness areas that AI tools might have important impact—where they might burrow down deep.
Early in his career, Klein helped develop the auditory-inspired algorithms that led to the kinds of sensors in your iPhone that allow Siri to “hear” you ask for directions or call a contact. Conservation Metrics employs that technology to make field recordings in wilderness environments. Trained to differentiate among sounds such as trees rustling, bird calls from a variety of species, or human intervention such as noise pollution, the AI can also generate a temporal-spatial analysis that maps where and when the sounds occur.
“Data might reveal species behavior that wasn’t captured before,” Klein explains. “A heat map can show bird call activities as a function of the time of day and year, differentiated across types of behavior, such as chicks hatching or birds mating.” Data-driven conservation work is also seen in the open-source efforts of projects like Global Forest Watch, an online platform that uses multiple data sets and analysis to monitor deforestation and illegal clearing activity in real time, or Global Fishing Watch, which tracks commercial fishing activities in the world’s oceans.
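The temporal-spatial analysis Klein describes amounts to binning classified call events by time and behavior. A minimal sketch follows; the `call_activity` function and event format are hypothetical, standing in for the output of a trained acoustic classifier rather than Conservation Metrics's actual tooling:

```python
from collections import defaultdict

def call_activity(events):
    """Bin classified bird-call events into (hour, behavior) counts.

    events: iterable of (hour_of_day, behavior_label) pairs, e.g.
    the output of an acoustic classifier run over field recordings.
    The resulting counts can be rendered as a heat map of activity
    by time of day and behavior type.
    """
    grid = defaultdict(int)
    for hour, behavior in events:
        grid[(hour, behavior)] += 1
    return grid

# Invented events: two dawn-chorus detections and one night call.
events = [(5, "dawn chorus"), (5, "dawn chorus"), (22, "mating call")]
activity = call_activity(events)
print(activity[(5, "dawn chorus")])  # → 2
```

The same binning extends to day of year or grid location by widening the key, which is all a "heat map of bird call activities as a function of time" requires once the classifier has labeled the raw audio.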
Klein notes that much of Conservation Metrics’s revenue comes from monitoring the relationship between wildlife and the built environment. He points to the example of the Tristram’s storm petrel, an endangered bird that lives on Tern Island in the Hawaiian Islands National Wildlife Refuge. Working with the U.S. Fish and Wildlife Service, Papahānaumokuākea Marine National Monument, and Hawai’i Pacific University, Conservation Metrics used acoustic surveys and machine learning tools to monitor the species. “[The storm petrel] had been known for years to be colliding with power lines in a rugged jungle terrain,” Klein says. “The species would go out on long feeding runs and come back at night and hit the lines. The utility company has to pay reparations for each endangered bird killed.”
The company’s sensors could detect the sounds of birds hitting the power lines, tracking how many collisions were happening and where. Klein says the data revealed that the number was far worse than the study team had thought. But after accounting for covariates such as moon phase, terrain slope, and electrical tower height, the team found that 5 percent of the span of power lines represented 95 percent of the collisions. The analysis pointed to the deadliest stretches, where the lines could be moved underground.
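The concentration the team found, with a small fraction of spans producing nearly all collisions, comes down to ranking spans by collision count until a target share of the total is covered. A minimal sketch, with invented span IDs and counts rather than the study's data:

```python
def spans_covering(collisions_per_span, target=0.95):
    """Return the fewest spans whose collisions reach the target share.

    collisions_per_span: dict mapping span id -> collision count
    (hypothetical counts; the real analysis also controlled for
    covariates such as moon phase, slope, and tower height).
    """
    total = sum(collisions_per_span.values())
    chosen, covered = [], 0
    # Greedily take the worst spans first until the share is met.
    for span, n in sorted(collisions_per_span.items(),
                          key=lambda kv: kv[1], reverse=True):
        chosen.append(span)
        covered += n
        if covered / total >= target:
            break
    return chosen

counts = {"A": 1, "B": 90, "C": 2, "D": 5, "E": 2}
hotspots = spans_covering(counts)  # → ["B", "D"]
```

Here two of five spans account for 95 percent of collisions, the same Pareto-style pattern the study found, and the output is exactly the shortlist of stretches worth burying.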
“Because we’ve put the data into a model, it gives us an idea of how to plan for future development…adjustments to the built environment, modularity of design, or lighting, and the anticipation of possible need for future modifications,” he explains.
Both SWA’s and Conservation Metrics’s projects suggest machine learning as a problem-solving tool skilled at pattern recognition in visual or auditory data. Its application generally involves existing site conditions and analysis. But can these computational entities be generative to the landscape design process?
Robert Gerard Pietrusko, an associate professor of landscape architecture at Harvard University’s Graduate School of Design, believes they can be, but that doing so requires a slight rethinking of how AI has been defined thus far. We need to break into AI’s proverbial “black box.” Because we tend to anthropomorphize AI, we imagine that neural networks “think” and “learn” as they invisibly and inscrutably crunch code. But what if we were able to see and better control what is in the box?
Instead of the black box, Pietrusko prefers an agent-based model (ABM) to test a variety of situations within the design process. ABM simulates conditions based on the behavior of individual “agents” subject to particular boundaries or parameters. “These tools allow us to prototype and speculate on future urban conditions and landscapes,” he explains. “Their potential is in dealing with ecological complexity. Input might take the form of a community of species and the type of land management; then you might be able to predict how something performs over time.”
A computational modeling system similar to AI, ABM allows for more transparency and flexibility within the computational process. Whereas the black box offers surprise, ABM clears the fog of mystery. Designers can go back and track which parameters and decisions led to which outcomes. “When something seems absurd and interesting, but useful, you want to be able to drill down and find out what happened,” he says.
Pietrusko works with students to develop cartographic representations that simulate urban and ecological processes (generally animations rather than fixed maps). The goal is to prototype, speculate on, and predict future urban conditions or landscapes over time. Design agents allow for the testing of multiple scenarios using simple rule sets in dialogue with each other, such as the grade of a slope combined with percentage of land cover, planting strategy, or type of species. The results might look complex, but they are representations of a system’s intrinsic behavior and limited parameters.
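A rule set of the kind described above, in which a slope threshold interacts with a planting strategy, can be sketched as a toy agent-based simulation. The vegetation-spread rule, the slope cutoff, and every parameter below are illustrative assumptions, not drawn from Pietrusko's studio or any published model:

```python
import random

def step(cover, slope, spread_p=0.5, max_slope=0.3, rng=None):
    """One tick of a toy vegetation ABM on a 1-D transect.

    cover: list of booleans (vegetated or not); slope: per-cell
    gradient. Each vegetated cell tries to seed its flat-enough
    neighbors with probability spread_p. The rules are simple and
    fixed; complexity in the output comes only from their
    interaction over time, which is the point of the approach.
    """
    rng = rng or random.Random(0)
    new = list(cover)
    for i, planted in enumerate(cover):
        if not planted:
            continue
        for j in (i - 1, i + 1):
            if (0 <= j < len(cover) and not cover[j]
                    and slope[j] <= max_slope
                    and rng.random() < spread_p):
                new[j] = True
    return new

cover = [False, True, False, False]
slope = [0.1, 0.1, 0.2, 0.9]  # last cell too steep to colonize
for _ in range(10):
    cover = step(cover, slope, spread_p=1.0)
print(cover)  # → [True, True, True, False]
```

Because the rule set is explicit, a designer can rerun the simulation, vary one parameter, and trace exactly why the steep cell never vegetated, which is the transparency ABM offers over the black box.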
“When you throw complexity at something and it results in complexity, no one is surprised,” Pietrusko says, touching on a much-needed distinction between machine learning and other software that generates elaborate, formalist designs. “The point of the tool is to inspire design moves rather than trying to model the world.”
Mimi Zeiger is a Los Angeles-based critic and curator.