The More-Than-Human World of Entangled Others

In their creative practice, the artistic duo Entangled Others delve into the complex and expansive realm of nature, ecology, and the more-than-human (non-human) world, using techniques such as 3D modeling, neural networks, and artificial intelligence to bring their visions to life.

from the Intertidal Samples series

from the Intertidal Samples series

When we talk about creative imagination, Sofia Crespo poses a thought-provoking question: Can we envision a color that we have never seen before? In a similar vein, can we visualize futures that we have yet to witness?

Crespo argues that although our minds may struggle with these concepts, we can still attempt to construct stepping stones toward the unknown. By pushing the boundaries of our imaginations, we can challenge ourselves to break free from the confines of what we already know and explore new frontiers.

Through Entangled Others' artwork, we are invited to question our limited perception of the world and are offered a glimpse into the vast possibilities that exist beyond our natural habitat.

As Feileacan McCormick explains, “Our intention is to show how interconnected the world actually is through works focused on specific subjects and themes that attempt to unveil, and make tangible as an experience, the innate entangled nature of the world”.

We’re proud to present an in-depth interview with Entangled Others as they take us through their practices, opinions on the current state of AI, and their overall mission as a studio.

On the left, Sofia Crespo; on the right, Feileacan McCormick


Can you describe your practice and the tools you’re using?

Feileacan:

We're an artist duo that began collaborating in 2020, and we now share a studio practice where the focus is on the natural (more-than-human) world - nature, ecology, and finding ways to use technology (though not exclusively) to create experiences and works that help us connect with and explore the wonders of the more-than-human world, as well as our relationship to it.

Sofia:

We are trying to create interactions that sometimes speak about ecology or make us question our relationship with the non-human or more-than-human world.

How long have you both been practicing with these tools?

Sofia:

I was already working with AI and interested in the intersection between biology and AI in 2019, and then I met Feileacan whose background is in architecture and design.

Feileacan:

In 2018, I left architecture to focus on arts, as I had originally done my thesis in art & architecture and wanted to get back to that. I worked a lot with 3D scanning, game engines, and creating experiences of nature through recording and building virtual worlds, culminating in physical prints (as well as digital experiences).

exhibition at MAXXI National Museum of XXI Century Arts

When I met Sofia and we started collaborating, she introduced me to the world of AI, machine learning, and generative art, which turned out to be a pretty good fit for me, and where I wanted to go artistically.

Can you share with us the story of how you both teamed up to become Entangled Others?

Sofia:

I had been making art for a while, but in 2018, I went to a workshop about learning to generate images using AI. Before that, I was already interested in exploring AI, despite having no engineering background. I didn't know if it was possible for me to ever use AI as a creative tool until that workshop, and it blew my mind. After that, I wanted to learn more and did so online.

In 2019, I met Feileacan when I was trying to develop a project going from 2D to 3D, and I had no idea how to approach it. Feileacan had heard about machine learning but didn't know how to use it in practice, but we clicked artistically, so we started collaborating. He helped me come up with a solution to create what I wanted, and that’s how we got started.

In 2020, when the pandemic happened, we decided to solidify our collaboration into its own thing, creating our little studio as an artistic duo. We decided to focus on speaking about ecology and our connection to the natural world, exploring how creatures are perceived as others.

a render of generated, speculative submarine life - 2020

Feileacan:

I initially worked as an architect for a few years, even running my own company, but it didn't quite work for me in terms of creative fulfillment, so I transitioned to art full-time. Then I met Sofia and was introduced to generative art, which turned out to be a pretty natural fit for the way I like to approach executing concepts. That worked quite well.

What would you say your overall mission is as Entangled Others?

Sofia:

I'm trying to reconcile my definition of art with what it means to me to define the trajectory of the studio. Because in a way, it's hard to give art functionality or say what role it plays, but at least for me, it comes from a place of self-expression and even healing.

I needed to work through all the ecological anxiety that I was feeling and I wanted to create a point of interaction that left me feeling positive and hopeful, but not just in an overly optimistic way. I wanted to find a balance and process my point of view with nature.

The mission is also to start a conversation and ask questions about what it means to create representation for the more-than-human: within the technologies currently available to us, what themes or subjects can we explore?

But at the same time, we're also talking about bias in AI. For example, we have projects that are specifically about the limits of the data we have available about the natural world. A lot of our work also asks what it means to digitize nature, and how to take data and create richer datasets with a more accurate representation of creatures. So, yeah, I think we talk about nature, but in relation to technology as well.

Feileacan:

Our overarching goal is to show how interconnected the world actually is through our works and to help us understand it by creating experiences and focusing on specific themes and subjects.

Our practice is experimental and we're trying to find ways to nurture the understanding that everything is interconnected and that we have a responsibility as equal coexistent entities to care for ourselves and Others, to find a better way forward.

A lot of our focus has been on artificial life. Not as a way of exploring the replacement of nature or a mere recreation of nature, but rather to examine the world around us through it. Very often the digital is seen as a separate, platonic realm from the world we interact with. But in reality, the digital is as much a part of the physical ecosystems of the world as other, biological inhabitants. Therefore we've been exploring how to bring experiences of the more-than-human world into the digital and start from that point of view.

Also, there is a rich tradition of artificial life going back many decades and we've seen time and time again how research applications of artificial life are actually relevant to helping our understanding of the actual physical world, and this is a very interesting tension to work with (and from). We don't necessarily have to be in the natural world to have a valid experience of it, and we can start a lot closer to home, so to speak.

That's why a lot of our early work was trying to build experiences that start where people are, rather than trying to drag everybody out into the woods or diving, which also doesn't really scale.

I am curious to know more about how you two came together to form such a unique and inspiring collaborative partnership. Do you have individual roles?

Sofia:

We don't really have assigned roles. The idea is that we are both fully involved in the artistic process and that we both have a say. Because art can be deeply personal, we've gradually gotten used to working together and finding ways to communicate what is important to one of us and what is important to the other.

We do mind maps to help align our interests in a project and to orient ourselves. Of course, we can't both be doing everything together all the time. Sometimes we have to be like, 'Okay, you take care of this. I'll take care of that.’

We discuss a lot of our projects whilst walking. We actually just go for a walk and talk about something. So that's kind of our modus operandi in a way, having a walk in a park, or taking a day to hike in the countryside. Feileacan always carries a little sketchbook that he uses for taking notes and sketching in; he's really good at that.

Feileacan:

On an artistic level, we're both quite different in personality but also come from different backgrounds. So that means that we often see things slightly differently, which is a great strength because we have a back-and-forth stemming from our different skill sets and perspectives. We've often been quite complementary in that regard.

Sofia's an avid diver, so she'll often come back from a dive with new ideas or experiences that she wants to bring into our works. Or I might sit down and sketch something out, or test out a new way of doing something, and then that becomes something that we can turn into a new work together. It's kind of this natural back and forth that allows us to work both individually and together.

Time After Vessels #79 by Sofia Crespo - inspired by diving

Sofia diving in the Adriatic Sea

How do you see the relationship between ecology and nature in your work?

Feileacan:

Working primarily digitally in the context of ecology and nature is something we do because the tools available within the digital space give us certain affordances. A pencil, for example, affords us certain ways of rendering the world and shapes how we perceive it, such as light and shadow. A camera allows us to record in certain ways, and digital tools similarly carry their own potentials and constraints. Generative systems and tools like machine learning (AI) allow us to work at a different scale when it comes to data and intuition, an affordance that suits us well creatively. By working with machine learning, we can distill great quantities of data.

When Sofia dives and takes countless images, we can then train a neural network, which extracts essential patterns or qualities, and we find different ways of seeing the more-than-human that we otherwise might take very much for granted. When you sort of just walk around or swim around, you might not necessarily be able to pick out these underlying patterns or visual essence.

Working digitally allows us to explore the more-than-human world in a way that wouldn't be possible as individuals working by hand. It's a different scale and a different level of intuition we can work with, which we find to be very productive as opposed to just going out and photographing or painting the world.

Sofia:

Also, there’s a divide between what's considered natural and what's considered artificial or unnatural, and I really like this place of intersection where it's not clear anymore what is natural and what isn't. If you make a 3D scan of a tree, for example, it's not so clear anymore because it's a 3D scan, it's data, but at the same time, that data comes from something that we consider natural.

Our experiences are not completely detached from our digital experiences and our non-digital experiences, they're both entangled. They are in a kind of feedback loop and they don't exist separately from each other. And simultaneously, the artificial and the natural don't exist separately from each other.

Can you explain your creative process in more detail?

Feileacan:

This is something that varies a little bit, depending on the context. We have fast and slow work. Quantum computing, for example, is a conversation that started over two years ago, but only about nine months ago did we finally find the right time and resources to put the pieces of the puzzle together.

That's become a project now that has just been released into the world. On the one hand, we have these long-term R&D works that are usually in-depth and take a long time to reach some kind of maturity. We normally don't share much about them along the way; very often there's a lot more happening than we are able to post about online.

Then of course we also have the much “faster” works where we just sketch and play a bit more freely. This is often where we try new things, or explore iterations of works that touch upon qualities, or aspects that the previous iteration couldn’t reasonably contain.

It's usually all within the same thematic umbrella of nature and ecology in various ways. We're ever-evolving in how we articulate that, as well as the approaches that are relevant or interesting to us. And of course, it's fun to sketch and play as well, especially if there's a sudden deadline.

Sofia:

It has changed with the years. When we started, we were more playing around with things and seeing what happened. But now we've become a lot more purposeful about what we do.

Now we tend to come up with an idea or a topic that we're interested in exploring, then we think about what the materialization of this idea could be and what technique could fit. We'll also research techniques that we've never worked with before, like quantum computing. We first had an idea and felt it was something we wanted to use as a means of expressing that idea. Then we thought about how we could realistically develop the project.

Of course, it's nice to play without any idea, because creating with the idea first and the technique second can take a really long time to develop. Sometimes it's amazing, but on a day-to-day basis there are long-term rewards and short-term rewards, and it's good to keep some little short-term reward present as well.

Why did you choose to explore the world of AI?

Sofia:

I originally got interested in AI from a perspective very different from what we're working on now. Initially, I wanted to see what patterns it could extract from my browser history. I was interested in looking at myself and the content I was browsing, and also thinking about the idea of mental health in that context. Could an AI learn from that? What does that mean for mental health issues? So that's very different from what we're doing today. But that was the beginning for me, personally.

This starting point was back in 2017, and of course I didn't have the tools that I have now to work with. Not to mention the landscape of AI art was very different from what it is today, even though it wasn't so long ago. From that start, however, my interest really shifted: I got less and less interested in talking about the brain or mental health, and more interested in creativity and what artificial creativity means. For example, what kinds of forms can we imagine using it?

That's what initiated my first artistic series in this new direction, called ‘Neural Zoo’. This was also when Feileacan and I first met: it was that series that Feileacan saw and retweeted, and thanks to that, we met.

from the ‘Neural Zoo’ series

Feileacan:

I knew AI existed, and had seen images of DeepDream online, but I had never really seen any personal relevance to it in relation to the work I was doing at the time. So when I came across Sofia's work online, I had no idea how it was done, but I found the images amazing; they had a certain tactile sensibility that I really liked.

As she started introducing me to the world of AI, that's when things started to click for me, and generative tools and systems suddenly unfolded as relevant. What I found especially interesting was how GANs have this quality of visual distillation where patterns are extracted from a dataset. For example, if you train a neural network on images of jellyfish, what you end up with isn't actually a jellyfish, but something that has an essential jellyfish-ness to it.

from the ‘this jellyfish does not exist’ series

If you've successfully created the dataset and training process, you end up with something that, if you look closely, isn't a jellyfish. Speaking with a marine biologist confirmed that what one mostly gets are rather freaky hybrids between different types of jellyfish. So the resulting image isn't a jellyfish but, rather, something we recognize as the visual essence (to us) of a jellyfish.

This aspect of distillation became, for me, a way of creating a different kind of interface and a feedback loop: the process of creation and curation that goes into building a dataset (one that involves continuous decision-making, both conscious and unconscious) is ultimately externalized through the training of the model, confronting you with the results of those choices.

If you're sorting through a thousand, ten thousand, or a hundred thousand images that you've created for a dataset, it's very hard to get a sense of the essential patterns or tendencies that define it. That changes once you've trained the network: you're suddenly confronted with a tangible distillation of the dataset's features. In the case of ‘This Jellyfish Does Not Exist’, for example, we hadn't realized it during the process, but once we had the final model we discovered we hadn't created a model of jellyfish.

from the ‘this jellyfish does not exist’ series

Rather, we had captured only the one stage of the life cycle that we commonly associate with jellyfish; in fact, jellyfish go through more than seven life-cycle stages, a metamorphic life cycle that makes a butterfly's pale in comparison. Not to mention the fact that some species can regress to a previous stage, making them theoretically immortal. A rather mind-blowing fact, honestly.

from the ‘this jellyfish does not exist’ series

This growing understanding, that these tools allow us to create feedback loops in which we are confronted with ourselves in a very different way, and that this lets us actually see the world around us a little more clearly, was really quite important to me.

A parallel is the microscope, one of Sofia's life-long passions: a piece of technology that brings us closer to the world because it allows us to see what the naked eye cannot, and what we would otherwise be entirely ignorant of. So, even though it's a piece of technology, it actually brings us closer to the world. It is this open-ended potential of technology that we attempt to utilize in our practice: creating, through technology, attempts at nurturing connections with all that surrounds us.

‘Amylococcus’ from the ‘Artificial Remnants’ series

In the series ‘Artificial Remnants’, we were attempting to create a speculative study of artificial insects that did not exist. One of the interesting things was how humbling the process was. Every time we thought we had created something far out, or novel, we would be immediately brought back down to earth merely by opening Twitter and seeing an insect posted by a researcher that was so much further “out there” in terms of form, morphology, function, patterning, and behavior. This was, of course, a consequence of just how little data we have on the more-than-human world.

Aiiiii: The Book of Sand 2021

Even though we can create all these fantastic things, the reality is that we are not really replacing, or even coming close to improving upon anything. Rather, technology is useful as a means to get to know the world better. From the point of view of our artistic practice, we are not interested in technology for the sake of technology. It's fun, but it would make little sense to just go around doing tech demos.

For us, it's always been about trying to find the right kind of tools. That's why we don't always use AI, but rather we try to find, from the point of the concept or experience we'd like to touch upon, what makes sense or what helps us push the concept the furthest within any given frame of time, resources, or deadlines.

a specimen from the 3.0 generation, as observed in the virtual (natural) space

What are your favorite tools to work with regularly?

Sofia:

We don't use the same tool for every project, but we do have some algorithms that we're really emotionally attached to. One technique I'm attached to is the first implementation of neural style transfer. That was the first AI tool that I used, and I'm still excited about it, even though few people use it today.

Another technique that I'm into is not digital: the cyanotype, which dates back to the 19th century and is still used by a lot of artists. I like using it to print my own digital works because I enjoy the whole process: mixing the chemicals by hand, exposing, and treating the resulting prints. Also, it's not toxic, unlike many other photographic techniques. So those are two techniques that I'm really into.

Feileacan:

We've spent a lot of time building and developing, especially with GANs and similar generative network architectures. Working with GANs became a natural starting point. Over the years we've been building our own toolkits around them more and more, evolving them to fit new ideas and avenues of research and experimentation. Even though you have all these newer, fancier architectures such as diffusion models, large language models, and transformers, we still find meaning in continuing to use so-called “obsolete” architectures.

For example, our most recent work, ‘decohering delineation’, is our first released foray into experimenting with quantum computing, but it is a project built with GANs at its core, not for the sake of using them, but because they work best for that concept and for the way we want to execute it.

from the Decohering Delineation series - REALTIME at Nxt Museum

from the Decohering Delineation series - REALTIME at Nxt Museum

We have other explorations in the pipeline that use very different architectures, so our toolkit evolves through creating works; in that sense we're really quite agnostic. It's all about what the concept is and how we can articulate and nurture it beyond what we could imagine. Most of the time, it still starts here with a sketchbook, and from there it becomes explorations and experiments with datasets, code, and serendipity.

Are there other tools outside of AI that you use or would like to use in the future?

Sofia:

Biomaterials, materials that come from organic matter, interest us for sculptural purposes. That's something we've been really into as a line of experimentation over the past few years, but they're extremely challenging to work with. There are very few people fully invested in developing biomaterials, something I hope will change in the future. It's not a tool specifically; it's more like an area of research that we are into.

We have, for example, grown mycelium in order to make a lamp out of it. This was in 2019. It's hard to think of mycelium as a tool, you know? Because it's a living entity.

mycelium 2019

Feileacan:

It depends on the medium. We've built tools around Blender for more three-dimensional work, which is a delightfully open and hackable foundation. We also work with P5.js for more generative stuff, and as Sofia mentioned, biomaterials. In general, we’re moving more and more into the physical. Not just working purely digitally, but trying to work in a hybrid manner.

That's what's been on the more R&D end of things over the last couple of years, and it's been intensifying more and more. We hope to have the first large-scale application of that finished towards the end of this year.

It's hybrid in the sense that it is both digital and physical, and we're trying to find ways to articulate this entanglement of the digital and the physical both sculpturally and conceptually. We've wanted to push this avenue a lot further because many of our concepts invite that kind of approach once we start exploring entanglement and the hybrid.

Naturally, that’s more on the ambition end of things. We've been slowly building into that, but it's very much a transition from working with software to working physically. And since it's only the two of us, we've also had a certain constraint to how fast we can learn things.

One thing you mentioned working with was quantum computing.

How was that experience?

Feileacan:

We've had a lot of headaches just trying to understand the basics, but it's something we've been wanting to explore for a long time, so we’ve persevered. The barrier to entry has been quite high, but last year we finally found some concepts that felt very natural to articulate within the context of quantum computing. We set aside the energy to really dive into that, and it's been very technical. We're not able to do everything ourselves, so it's been a slow exploration.

It's exciting to think of quantum computing in art and where that's going to go because it’s still very much under the radar at this point, with very few artists using it. The space is in its very early stages, but there's so much possibility.

Just to mention the artists we know of who have been working with quantum computing: Libby Heaney is one of the best-known (and has definitely been a huge inspiration for us to pursue this direction), along with David Young and Pindar Van Arman.

Sofia:

It's been a slow process, building tools that didn't exist or that are only now being built. For our most recent project, we wanted to entangle several GAN models using climate data as the driver, so that the physical world could exert an influence upon the digital specimens which inhabit the regions sampled. Rather than being merely a “visualizer” of the data, the work becomes an interaction between the artificial and the shifting changes in weather and life in the world around us.

This has also been a challenge, not just technically, but also in balancing experiences that can be enjoyed as-is yet, upon closer inspection, reveal multiple layers that speak to the many moving parts and focal points within the works. It's not just the difficulty of explaining the work to the general public (we are quite familiar with that challenge from working with AI before it was a household topic); it's also that a lot of aspects aren't readily intuitive. We've also spent a lot of effort framing the project in a way that communicates why we're excited about it. Most of our works, and also the tools we create and use, come about from enacting experiments in which we try to explore a concept.

Our interest in quantum computing stems from our explorations in which we're trying to find ways of breaking down the digital-physical divide, but also of integrating the more-than-human in more direct ways without it becoming problematic.

How have your toolsets and approaches to AI evolved over time? Do you incorporate other levels of AI into your process?

Sofia:

We keep adding more tools, but we also keep losing them as well because of the challenges concerning software dependencies that become incompatible as hardware and software evolve. The field is moving so quickly that, for example, software that we developed specifically for a project four years ago is now kind of obsolete because you can no longer run it. You would have to create a capsule that replicates the environment we had for running it in 2019.

The big issue with machine learning is that you're building on top of a lot of different libraries, in addition to running them on the GPU, which has its own specific drivers; the compatible software can vary from one hardware generation to the next. You rely on all those pieces working together, and if one is broken, you have to find your way around it or build anew.

So there's a lot of maintenance we have to do, and sometimes we have to think and say, “Okay, is it worth rewriting or reworking the code so we can run it in our current setup?”

It's also very much an evolving practice. We used to do a lot of data normalization by hand, but now we have more tools that allow us to automate parts of it: for example, neural networks that perform specific transformations in a more content-aware manner on a per-image basis, something that would take weeks by hand.

Feileacan:

When we started, it wasn't really an accessible field for artists who don't come from a specifically technical background. There were very few resources, and it was a lot harder to build your own tools. Over time, as we've learned more, we've been able to build our own pipelines and tools and evolve them alongside the conceptual work.

The common adage is that it's easier to run 20-year-old code than six-month-old code. In machine learning it is very, very true; even running something from last year can be problematic these days.

So we constantly exist in a sort of moving window of time where running projects we did three years ago could take a couple of months of an engineer's time to update/adapt the tree of dependencies and deprecated code into a modern hardware & software environment.
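As an aside, the drift the duo describe can be made concrete with a small check against a version lockfile. Everything in this sketch is illustrative: the package names and pinned versions are hypothetical placeholders, not the studio's actual stack.

```python
from importlib import metadata

# Hypothetical lockfile for an older project: the exact versions it was
# built against. A real project would pin dozens of packages.
LOCKED = {"numpy": "1.16.4"}

def check_environment(locked):
    """Report packages whose installed version differs from the lock."""
    mismatches = {}
    for pkg, wanted in locked.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            installed = None  # package is missing entirely
        if installed != wanted:
            mismatches[pkg] = (wanted, installed)
    return mismatches

drift = check_environment(LOCKED)
for pkg, (wanted, got) in drift.items():
    print(f"{pkg}: locked {wanted}, installed {got or 'missing'}")
```

A check like this only reports the drift; actually reviving an old project means rebuilding the whole "capsule" (OS, drivers, framework builds) that those pinned versions assumed.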

Can you explain what working with a "neural network" means?

Feileacan:

It has to be said there are many different types of architecture, especially nowadays. So the way one works with them can vary because of that.

Basically, with all neural networks you have the initial, entirely crucial step of creating a dataset. You have to create and curate the dataset, which takes time and focus, because you're deciding “this is relevant, this is not relevant”, “this is interesting, this isn't interesting”, and so on.

Then you have the process of training, which nowadays can take anything from hours to months depending on the type of neural network architecture, the size of the dataset, and of course your hardware. What you are doing is letting the algorithm attempt to infer patterns and features from the dataset you created. For ‘Critically Extant’, for example, training took more than a couple of months in total.

The process of training results in a model which you can then run inference upon. By this we mean that you can use the model, in the case of generative models (which are the type we most commonly work with), to generate a series of outputs based on what the algorithm has “learned” from the datasets. One of the benefits of using neural networks is that you can generate a single output or thousands.
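For readers unfamiliar with this train-then-sample workflow, it can be sketched in a few lines. The "generator" below is a toy stand-in (random, untrained linear layers, nothing like the studio's actual models), but the interface is the one described above: a latent vector goes in, an output comes out, and sampling one output or a thousand is just a matter of how many latent vectors you draw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "trained" generator: two fixed linear layers with a tanh
# nonlinearity. A real GAN generator exposes the same interface:
# latent vector in, image out.
W1 = rng.normal(size=(64, 16))
W2 = rng.normal(size=(3 * 8 * 8, 64))

def generate(z):
    """Map a 16-dim latent vector z to a tiny 3x8x8 'image'."""
    h = np.tanh(W1 @ z)
    return np.tanh(W2 @ h).reshape(3, 8, 8)

# Inference at scale: one latent vector gives one output,
# a thousand latent vectors give a thousand outputs.
latents = rng.normal(size=(1000, 16))
outputs = np.stack([generate(z) for z in latents])
print(outputs.shape)  # (1000, 3, 8, 8)
```

Because sampling is this cheap once training is done, generating thousands of outputs to sift through, as described next, costs almost nothing compared to building the dataset and training the model.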

Caridina Woltereckae (Harlequin Shrimp Sulawesi)

There is an aspect of meditation to exploring outputs, mainly because you can generate at scale. For example, with a GAN you can generate a thousand outputs of jellyfish and then start to ask questions: are these jellyfish interesting? Is there something about the visual qualities of this cluster of outputs that is especially interesting, and what's going on there?

Or, as often happens, you discover that a different pattern than you expected emerges, which in turn might lead you in a new direction, or require that one adjusts the dataset or creates a new one. So this is a sort of practice of cyclical refinement, a feedback loop.

So very often, what we work with is a daisy chain from the curation of the data set to the final outputs. Often we work with multiple architectures, models, and pipelines to further refine the process.

Based on your experience working with these neural networks, do you find that one network is enough to satisfy your artistic practice or do you create a series of them?

Feileacan:

It honestly depends on the work itself. For example, in the series ‘Chimerical Stories’ we wanted to highlight the life cycle of jellyfish, which very few people really know about or have any real understanding of. For this, we only used GANs, but we developed a way of cross-breeding visual qualities from different models, allowing us to mix specific traits of different specimens. That was just for that project, though. For our practice in general, it varies.
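The interview doesn't detail how that cross-breeding was implemented, so the sketch below shows just one common approach under that caveat: interpolating the weights of two generators that share an architecture. The two "models" here are random, untrained stand-ins, purely to show the mechanics.

```python
import numpy as np

# Two stand-in "trained" generators: same architecture, different weights,
# as if each had been trained on a different jellyfish dataset.
def make_weights(seed):
    return np.random.default_rng(seed).normal(size=(48, 8))

W_a, W_b = make_weights(10), make_weights(20)

def generate(W, z):
    """Map an 8-dim latent vector z to a tiny 6x8 'image'."""
    return np.tanh(W @ z).reshape(6, 8)

def crossbreed(W_a, W_b, z, alpha=0.5):
    """Blend two models by interpolating their weights, then generate.
    alpha=0 reproduces model A's output; alpha=1 reproduces model B's."""
    W_mix = (1 - alpha) * W_a + alpha * W_b
    return generate(W_mix, z)

z = np.random.default_rng(1).normal(size=8)
hybrid = crossbreed(W_a, W_b, z, alpha=0.3)
print(hybrid.shape)  # (6, 8)
```

With trained models, sweeping alpha produces a continuum of outputs whose traits shift from one "species" toward the other, which is one plausible reading of mixing specific traits of different specimens.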

from the ‘Chimerical Stories’ series

Sofia:

I think we don't focus specifically on one. The concept and how it's executed, whether it's a physical work or a digital work, or a hybrid, really becomes the deciding factor of the tools. Tools become secondary to the execution of the concept in that regard.

What would you say is the biggest misconception about working with neural networks and AI?

Feileacan:

That it (AI) is conscious. I mean, theoretically speaking, there is always the potential for future software architecture to have the capacity for consciousness. But for now, this is still science fiction stuff. As humans, we tend to imbue things with qualities that aren't there because they match a familiar pattern from another context.

A lot of the problems we're seeing now with large language models stem from the fact that it's a textual interface: you're working with text, and normally, the only way you would get that kind of interaction is from other humans. So we tend not to think of text as coming from a generative system; rather, we see it as something innately human. From our experience working with these algorithms, they might not be conscious or have any concept of what the data they learned from is (we can generate incredibly realistic images of cats, but the neural network has no concept of what a cat is in the manner we do), but as tools, they can still enable us to do some pretty amazing things.

I think there are a lot of misconceptions these days because of the popularity and hype around these tools like DALL-E, for example, that falsely give the impression that all AI is just writing a text prompt and hitting a button. It's nice that there's been so much focus and talk about AI as part of a creative toolkit, but it's become a very monotonous perspective on what AI tools are in terms of artistic practice, what they can do, and what they require. Luckily, we're not the only artists working in varied ways with these tools. With time, once the hype dies down, there will be a more balanced image of what these tools are and can be.

Our work is not just typing a text prompt and calling it a day. While that's perfectly valid as an artistic practice, by the way, it's not the only way of doing things, and we should strive to nurture an understanding of that. 

Sofia:

I think that there's a lot of vocabulary around collaboration with AI, but I think it's all very misleading. I know a lot of artists like to talk about their practice as collaborative when they're using AI. I respect that. It's totally fine if that's part of their artistic statement, but I personally think it's misleading because it feeds this larger narrative that AI is conscious, or has the capacity to become conscious.

I've had the experience of giving talks where I went through my artistic process and the tool and the way that I work with the dataset. Afterward, people came up asking, "So you basically work with MidJourney, right?" I was like, "Oh, okay, so it’s still not clear even after all of that”.

That's another common misconception, that all AI art is made with corporate tools like MidJourney or DALL-E, and that's the only thing there is. There are a lot of AI tools, and there are a lot of artists who write their own software.

Can you elaborate on your approach to how you develop and refine these neural networks to execute your ideas?

Feileacan:

When we wanted to entangle the inference process of two GANs, for example, there was no off-the-shelf solution for that kind of approach to working with GANs. We spent a lengthy amount of time collaborating with an engineer to come up with various approaches to achieve that, taking into account the architecture of the neural network and how the use of quantum computing would work in relation to the concept itself, which structures and informs the kind of entanglement intended. It was a process in which we spent months exploring different ways of solving for that specific kind of mechanic: building, testing, and refining until it worked as an expression of the concept.

For us, it is very much whittling away at potential ideas and solutions until we have something that works and embodies what we are after. When we first started collaborating, Sofia said she wanted to work in 3D, but there wasn't anything like a 3D GAN available to us, especially not without a Ph.D. in computer science and experience coding neural networks, which we simply didn't have at the time.

We spent a long time brainstorming ways to push a 2D neural network to work in 3D. After a lot of trial and error, we came up with a way for a 2D neural network to produce 3D models, which became the first couple of generations of the series ‘Artificial Remnants’. By the time we reached the end of the second generation of ‘Artificial Remnants’, there were a whole host of new developments. We were able to incorporate some of those and combine them to create ever more complex and detailed specimens.

It was this process where we didn't know how to solve something, but we knew we wanted to solve it. We just kept exploring and playing and sketching different ways until at some point it worked.

This doesn't always work, of course. Sometimes we spend a lot of time trying to build or develop something, and ultimately it just doesn't work, and that's fine. It gets archived, and that's that. Naturally, we don't share much about those dead ends, so a lot of our efforts never see the light of day. We have a very experimental practice in that sense. We do a lot of experiments, and some of them don't work (either conceptually or technically, or a mixture of both). This happens, and that's part of the fun of it too. After all, we are not here to build products or to arrive at one final, static output.

We’re constantly exploring and playing and seeing what happens.

How do you decide on the data sets for your work?

Sofia:

It depends on the work, of course, but for the aquatic works, for example, we have used both data that we filmed ourselves, literally recorded while diving underwater with an aquatic drone, and artificial data generated using generative 3D models. I do a lot of diving and snorkeling, which means that, thanks to underwater cameras, I am continuously creating and expanding our datasets.

Feileacan:

For ‘Critically Extant’, we used the biggest open, permissively licensed dataset of life on earth, containing some ten thousand species across nearly three million images. The idea was to enact a future recreation of currently critically endangered species, as if they were extinct. We wanted to see how well we could recreate these species using cutting-edge technology, as a way of showing that even the latest technology still has very clear limitations on what it can do, especially when we lack the data. These critically endangered species often have only a scientific name and the date of their last evaluation. Using neural networks as a tool here helped us ask the question: given the data available to us, how well do we know the natural world?

Piliocolobus Pennantii (Pennant's Red Colobus) from the Critically Extant series

And as it turns out, not very well at all. Approximately three million images across 10,000 species is basically a drop in the bucket compared to the some 1.2 million species cataloged so far. Meaning that even the biggest open dataset is entirely inadequate for a realistic recreation of what these species might have looked like. That was part of the project: to have this performative aspect show that the hype around big data and AI is very much inflated, and that we actually know far too little about the more-than-human.

And that’s the reality of it, especially when it comes to the more-than-human, we know next to nothing. So we should strive to be a little more humble about that and take that into consideration.

‘Dawn Sensors’ from your Hybrid Ecosystems collection, evokes a lot of emotion. Can you talk a little bit about this piece and the meaning behind it?

‘Dawn Sensors’ from the ‘Hybrid Ecosystems’ collection

Sofia:

One day, I saw Feileacan creating an image that looked like a field. We used to go on a lot of bike rides to nearby forests back then. The image he generated looked kind of like a circuit board with trees. This concept fit perfectly with the ideas we had been talking about for a while - the connection between the natural and the unnatural (or artificial).

Then we started to pursue the series more. The way I see the series is largely a combination of our digital reality with the things we see outside when we go on a little field trip.

'kelp systems' from the ‘Hybrid Ecosystems’ collection - sound by Alejandro Mune

Feileacan:

We wanted to try something different with ‘Hybrid Ecosystems’. A lot of our previous works had been focused on mapping out the boundaries of our knowledge and the representation of the modern human world around us. But with this series, we wanted to cast our vision forwards toward a more ecological future.

Often, visions of a more ecological future involve the status quo with some shiny yet non-existent technology and lots of greenery on top. Our (popular) visions of what a more ecological world could be are worryingly monotonous and not very creative. This series was meant to try something different. As Sofia always likes to point out, you can't really imagine a color that you've never seen before, which entails that we can't really imagine futures we haven't seen before. But that doesn't mean we can't try, or can't build stepping stones towards something we don't know and haven't yet experienced.

from the series ‘Hybrid Ecosystems’

We wanted to combine and recombine the familiar in ways that touch upon a harmonious form of coexistence between the digital and physical, in a positive and aesthetic way. It's not a concrete proposal for the future, but rather an attempt to start the process of stimulating our imagination of what a better ecological future could look like.

You have been a part of many exhibitions and galleries since 2020 and have exhibited your work in exciting ways. From the aquatic silos in ‘Beneath the Neural Waves’ to the Times Square takeover with ‘Critically Extant’.

exhibited at Aiiiii: ‘The Book of Sand’ in 2021

Which would you say was your favorite or the most exciting way to see the work exhibited in public?

Critically Extant in Times Square, New York City

Sofia:

I think two of the most exciting moments were ‘Critically Extant’ being displayed in Times Square, which was incredible because it's a public intervention in a way, taking over screens that are normally used for advertisements and are extremely expensive for brands to get a spot on.

‘Critically Extant’ at EP7 in Paris - 2022

It's just incredible having the chance to use 96 screens at one time, even if it's just for three minutes a day, and to use that to talk about the very specific topics of bias in datasets, lack of data, and critically endangered species. It's just an incredible way of exhibiting the work. But another one that was really exciting for us was the inflatable sculptures that were part of a show we did in Shanghai, China.

‘Inflatable Sculptures’ on display in Shanghai, China

It was really, really cool because we're used to thinking of insects as these tiny things. And in this case, we had the chance to create an inflatable sculpture that was about eleven meters tall, a size that makes humans feel pretty small by comparison.

We had an exhibition open at the end of March 2023 which was also special for us. It's the first time we exhibited our work at a botanical garden, which has been on our bucket list for a very long time.

‘sediment nodes' as part of Sonar+D at the Estufa Fria botanical garden in Lisbon

We did a takeover of the Estufa Fria botanical garden here in Lisbon, with the help of curator Joana Seguro, and collaborated with an artist called Ana Quiroga, who composed a soundtrack for the whole experience. It's a 20-minute journey where you're walking around, listening to the soundtrack, navigating through the botanical garden and all the screens that are placed around it.

‘sediment nodes' in the wild at the Estufa Fria botanical garden in Lisbon

How important is presentation and display to your artwork?

Sofia:

I think it's super important because a lot of our work has a very strong connection to the physical ways of displaying it. We feel that our work often isn't just a digital piece meant to be seen in a browser. We really enjoy sharing our work in the context of an exhibition, trying to create a specific, physical presence for the work. The process of creating sculptures or picking a display for a specific piece, and considering the journey that people will have exploring the space is all super important to us. It becomes a part of the work in the end.

Your approach to presenting art in our physical reality is refreshing.

What are some novel and innovative methods of exhibiting your work that you’d like to experiment with in the future?

Feileacan:

I have a notebook full of them. There's a lot of research still to be done, so we're slowly building towards more physical things, but not in a primarily static sense.

Sofia:

I would like to have an exhibition at the Natural History Museum in London, or at Kew Gardens too. Those are two places I first saw in David Attenborough's documentaries, and I was like, “Oh my God, it would be incredible to make a show here”.

Also, I think it would be amazing to just make an installation that's more of an intervention in the middle of the forest, someplace that's not so accessible, but in return becomes uniquely context-specific. There’s this return to where the original inspiration came from with that approach, connected to land art in a way.

In your opinion, what has been the most impactful piece of art you have released to the world so far? If you had to choose one, which collection embodies Entangled Others and why?

Sofia:

I think ‘Artificial Remnants’, but I'm very attached to it because it was kind of our first bigger project. What do you think, Feileacan?

Feileacan:

I think ‘Artificial Remnants’ is definitely a strong candidate for that. ‘Sediment Nodes’ also has some embodied qualities, but maybe more on a conceptual level. They embody different aspects of our overall practice.

Sediment Nodes 01, 02, 03

Wilton Park Dublin: Living Canvas - 2022

Sofia:

Maybe ‘Beneath the Neural Waves’ is the project that best identifies what we do, right? It speaks about data and the interconnectedness of things.

Feileacan:

It's also, in some ways, our most unrefined series, because it's the one that's most experimental and still in progress.

Sofia:

It was one of the hardest works we've done.

Feileacan:

It's also one that is still brimming with unexplored potential for growth.

Sofia:

It was hard to develop that series because we wanted to create an artificial coral reef. When we started thinking about the problem of the dataset, we had to figure out where we were going to get 3D data of a reef to train our models, and how to get all the inhabitants of the coral reef equally represented.

There were a lot of problems to think about, and in the end we had to create artificial datasets, so the results are, interestingly, more a distillation of what we know about coral morphologies than of collected data (of which, in 3D, there is almost none at all).

‘Beneath the Neural Waves’ on display - part of the "arte e inteligencia artificial" exhibition

Feileacan:

We partnered up with an artist called Joel Simon, who had written a genetic algorithm for 3D coral growth, and we used his algorithm to generate 3D coral models to become the dataset. Working around the limitations of available data became part of the artistic statement, but it also goes to show how experimental and challenging the process was. Traveling to coral reefs and attempting underwater 3D scanning wasn't feasible for us either, in terms of resources and complexity.


For inquiries, please contact hello@artxcode.io.
