An Introduction to Christian Burke: Data Scientist at Refik Anadol Studios


Listen to the episode on Apple Podcasts, Spotify, Overcast, iHeart, PlayerFM, Podchaser, Boomplay, Tune-In, Podbean, Google Podcasts, Amazon Music, or on your favorite podcast platform.

Read the Show Notes

Watch on YouTube


[00:03] BW: Hello and welcome back to Pixels and Paint. We have a very special guest today, Christian Burke of Refik Anadol Studios. Christian, can you introduce yourself for listeners who may not know you?

[00:24] CB: Certainly. Hi, nice to meet you all. I’m Christian Burke, the lead data scientist at Refik Anadol Studios. I started working for Refik in 2018 when I was a sophomore in college. Over the past five years, we’ve led various AI, art, and web projects. I’ve primarily managed the data collection, data processing, and a lot of the backend work for our art pieces, especially on the machine learning and AI side.


[00:53] BW: How did you first connect with Refik?

[00:56] CB: We actually met randomly. He asked if I could do data collection, like downloading the internet and collecting images to train AI models. I agreed, and the rest was history.


[01:14] BW: So, you were already studying data science at the time?

[01:18] CB: Yes, I was at Duke University studying computer science.


[01:23] BW: When you began your journey in data science and computer science, what did you envision working on?

[01:31] CB: Initially, I wasn’t certain. I started teaching myself computer science around 12 or 13 years old. I was drawn to the accessibility and abundance of information available online. It empowered me to learn and apply anything I wanted. I originally imagined working primarily with web design, website building, and back-end servers. However, I found my way to data science when the opportunity presented itself.


[02:19] BW: Were you studying data science specifically when you started working with Refik?

[02:27] CB: No, I was only a sophomore in college and hadn’t delved deeply into specialization. I’d taken entry-level courses and was involved in statistics, math, and probability. I was learning data science, but it wasn’t my explicit intention from the outset.


[02:46] BW: That first summer with Refik must have been significant if it influenced you to change your major and continue working with him.

[03:00] CB: Absolutely. It was an invigorating experience. I had previously dabbled in data collection and processing. For instance, my first program fetched stock ticker data from Yahoo finance and analyzed those numbers. So I had some interest in data collection and visualization. That summer, Refik tasked me with downloading numerous images of New York for an art project. In 2018, I gathered about 153 million images of New York City. At the time, this was the most extensive dataset ever utilized for artwork.
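For readers curious what that kind of first project looks like, here is a minimal sketch in the same spirit: pull a price series and compute a simple statistic. It assumes the third-party yfinance package as a stand-in for the Yahoo Finance source Christian mentions; the details of his original program aren’t in the interview.

```python
# A minimal sketch of the kind of starter project described above:
# pull daily prices for one ticker and compute a simple moving average.
# Assumes the third-party yfinance package; the original program's
# details are not specified in the interview.
import yfinance as yf

prices = yf.download("AAPL", start="2017-01-01", end="2018-01-01")

# 20-day simple moving average of the closing price.
prices["SMA20"] = prices["Close"].rolling(window=20).mean()

print(prices[["Close", "SMA20"]].tail())
```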


[03:46] BW: Do you have any background in the arts?

[03:50] CB: Yes, I was quite involved in drawing and painting during high school and was an active participant in my school’s art program. I even won a scholastic art award among other achievements. However, in college, I couldn’t allocate much time for personal artistry. I did study ancient Greek archaeology, focusing on patterns on pottery and burial distributions. This kept me connected with the art domain, but I primarily concentrated on technical subjects.


[04:30] BW: Have you tackled any personal art projects since leaving college?

[04:43] CB: That’s a tough question. I’d say no, not directly. However, I believe everything in life can be interpreted as art, including writing code. There’s an inherent style and artistry to it. While I may not have created traditional art myself, I do see a strong artistic component in web design, programming, and such. Working closely with other artists, providing them data, and ensuring everything looks cohesive has its artistic elements.


[05:18] BW: I want to discuss Unsupervised, your piece in the MoMA. How did that collaboration begin?

[05:38] CB: They first approached us in 2020, expressing interest in the web3 world and our previous work. The exact context escapes me. In 2021, we launched the NFT project with MoMA named “MoMA Unsupervised.” We released about 6,000 NFTs: an edition of 5,000, nine editions of 100 pieces each, and a few unique pieces. We initiated this collaboration amidst the NFT boom in 2021. While the NFT realm provided support, there was no equivalent representation in the physical world.

There remain uncertainties regarding data usage, AI, web3, and what the museum should accept. After the successful NFT collection, MoMA reached out about a year later. They appreciated the NFT collection and proposed a physical installation in their lobby. They granted us access to high-resolution data from their collections, which enhanced our model. We then installed a 16×16 meter display in MoMA’s lobby around November 2022.


[07:23] BW: Yeah, I’ve been there and it is really amazing to stand in front of the biggest digital art screen I’ve ever seen. How did you get a screen that size? Where did it come from? Did you or Refik decide to go this big?

[07:46] CB: Yeah, we’ve been installing big screens around the world for quite some time. This isn’t new to us. Most of these screens are made from custom panels. If you get close, you can see smaller panels stitched together, though you probably can’t tell when it’s on. This screen is 16 meters by 16 meters. We’ve done ones even larger, but what’s special about this screen is its resolution. I think it’s a 2K or 3K resolution screen. So it’s really nice to have something this high quality and large.


[08:29] BW: How has life at the studio changed since that piece debuted?

[08:35] CB: There’s been more public attention, and also some controversy. With topics like web3, AI, and digital art, there’s always some debate. Many people come to the studio praising our work, but there are also those who critique digital art. However, the feedback has been mostly positive. The installation was set to be up for three months, from November to March. But due to its reception, it’s been extended several times and is still up. What’s meaningful for us is seeing people’s emotional reactions to our art. Watching children play in front of it, or loved ones sharing moments—it’s a touching experience. We’re glad to have shared that with so many.


Emotional Responses to Public Artworks

[10:00] BW: Yeah, I have a question that I’ve been asking recently that some people have a hard time answering. I’ve seen some of the negative reviews of Unsupervised, calling it something like a fancy lava lamp. I think that was the New Yorker or the New York Times. What does art do? Why make art at all?

[10:35] CB: It was one of my favorite quotes to ever read, calling the Unsupervised piece an intelligent lava lamp. For us, art is all about the human impact. We’ve focused on public art from the start of the studio. When it comes to the actual meaning and purpose of art, people take away whatever they want from it based on their own perspectives. What we really focus on is the human experience, allowing people to feel emotional reactions. The MoMA is a great example. We did another piece at KÖNIG GALERIE back in, I think, 2021 or early 2022. It was in a brutalist architecture museum with a massive screen at the end of a long chamber. People had intense emotional reactions there. Some cried tears of joy or sadness. For us, the human impact of art and allowing people that space to heal, grieve, or feel is what’s important.


[12:14] BW: When were you most surprised by somebody’s reaction? You’ve probably seen a range of reactions by now, but what was the one that most struck you?

[12:33] CB: It was at KÖNIG GALERIE. A couple who had lost a loved one recently were sitting there. They had such an intense reaction that they were crying together, appreciating the artwork, and feeling the emotions. It was a profoundly sad and beautiful moment.


Data Ethics

[13:02] BW: There’s a popular sentiment, a meme really, that I want to pivot to. It goes, “I thought machines would relieve us of manual labor, but instead they’re painting and writing poetry, while I’m worried about losing my job.” Why have the arts found themselves seemingly in jeopardy first?

[13:37] CB: That’s a really interesting question. First off, I don’t really feel like the arts are in jeopardy from AI. AI removes many technical limitations for participation in the arts. While I’m not a talented illustrator, I can use AI software to create illustrations. But artists have a unique ability to discern what’s attractive and appealing. I don’t believe you can replace the artist with AI. AI replicates what it has seen before; it doesn’t innovate or think like an artist. An artist always remains the most vital part of the process. When we use AI, there’s a strong influence from artists. AI lowers the technical skill barrier and increases productivity. For instance, a concept artist could produce hundreds of drawings per day instead of just a few.


[15:34] BW: I interviewed a concept artist, Andre Riabovitchev, who worked on the early Harry Potter films, and he’s really embraced AI. Another artist here in Portland, Oregon, Chazz Gold, suffered a brain injury and lost the use of his right hand. He turned to photography, but AI became the tool that let him create as he did before his injury. I’ve seen firsthand how AI has changed artists’ lives. What’s your view on artists’ training data? It’s a controversial subject.

[16:53] CB: Training data, like using art pieces to train a model, is a complicated issue. Data ethics in AI is a sensitive area. We’re concerned about data sourcing, its use, and online collection methods.

If someone fine-tunes a model on a specific artist to mimic their work and publishes it, that’s not the best use. The MoMA gave us rights to use their data to create our own artwork. Training a unique model and applying various processes meant we weren’t copying another artist’s work. It’s about fair use: taking something and modifying it enough to make it new. We can mimic someone else’s work with AI, but we can also diverge. The MoMA was significant because they recognized our work as new and different.


Discovering the Yawanawa

[19:00] BW: I want to discuss some specific Refik Anadol Studio pieces, especially the Winds of Yawanawa. Can you tell me about that piece?

[19:17] CB: The Winds of Yawanawa is a collection we created in collaboration with the Yawanawa tribe, an Indigenous group from the Amazon. We’ve been connected with them for a few years. Refik has visited them multiple times and deeply appreciates their culture and practices. For this project, we used patterns from their jewelry, fabrics, and clothing to create a new piece of work. Importantly, 100% of the proceeds go directly to the Yawanawa people. They’re using these funds to build a new village, well, and school. It’s been an inspiring project for us.


[20:15] BW: How did that relationship begin?

[20:18] CB: I’m not sure how Refik first met them. He somehow connected with them, expressed admiration for their culture, visited them in the Amazon, and has traveled extensively to spend quality time with them.


[20:44] BW: Can you share more about their culture and what distinguishes them?

[20:52] CB: They’re an Indigenous tribe from the Amazon, known for their herbal medicines and dedication to preservation. Their music and language are deeply unique. We’re currently focused on language preservation, as there’s concern that Indigenous languages are disappearing, and we’re exploring ways to safeguard their culture and language. They also have representatives who travel globally to share their practices. They’re deeply spiritual, and it’s truly inspiring to witness.


From Rumi to Mozart

[21:49] BW: You mentioned music and language. Are you working with audio, and what does that entail?

[22:00] CB: We work with audio, video, and text; those are our three main types of data. Text encompasses some of the language projects too. We’ve been involved in audio for a long time, using AI-generated audio in our studio. Often, in our installations, the background audio is AI-generated. For instance, we recently worked on a project with Dvořák, training a custom model to produce AI-generated Dvořák pieces.
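The studio’s audio pipeline isn’t described in the episode, but a common first step when training a generative audio model is converting recordings into spectrograms. The sketch below uses librosa and a hypothetical file name purely for illustration.

```python
# Hypothetical preprocessing step for training an audio model on a set of
# recordings: convert each file to a log-scaled mel spectrogram. The studio's
# actual pipeline is not described in the interview.
import numpy as np
import librosa

def to_log_mel(path, sr=22050, n_mels=128):
    """Load an audio file and return a log-scaled mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

spec = to_log_mel("dvorak_symphony_9.wav")  # hypothetical file name
print(spec.shape)  # (n_mels, time_frames)
```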


[22:36] BW: That’s fascinating. I recall a 1995 interview with Brian Eno in Wired Magazine. He predicted that someday you’d request a new piece by, say, Brahms, and it would be created for you. This vision seems to be materializing years after his prediction.

[23:09] CB: Yes, the advancements in audio models over the recent years have been significant.


[24:03] BW: At the 2022 PyTorch conference, you and Refik hinted at some works in progress, including a Rumi piece combining Dreams, audio, and text, and a Mozart piece reminiscent of a project by Maria Finkelmeier, who uses Unreal Engine for song modeling. What’s the status of these pieces?

[24:44] CB: Both of those pieces have been released as immersive exhibitions worldwide. Rumi, for instance, was showcased in Istanbul. It’s an immersive room where Rumi’s dance influenced the artwork, and we incorporated a rich image archive provided by the client. This project marked one of our initial uses of diffusion.


[24:52] BW: Can you tell me more about this?

[25:15] CB: We showcased this artwork globally. Traditionally, we’ve used GAN trainings, but diffusion has become prominent recently. This was one of our first projects using diffusion outputs. Rumi received an overwhelmingly positive response, especially given its cultural significance to Turkey. As for the Mozart project, it took place, I believe, in Germany. We utilized a similar approach to Rumi’s, incorporating generative audio from Mozart’s recordings. We also integrated a valuable image archive of Mozart-related content, examining text and old handwritten scores. It was another successful installation.


[26:06] BW: How did you incorporate text in the Rumi exhibition?

[26:11] CB: We created a model from the text, which influenced the fluid simulation in our artwork. In some pieces, there are large sweeping motions. We connected words from Rumi’s texts and other data using a network of nodes. A graph-searching algorithm was then applied to this network to influence the fluid simulation.
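To make that concrete, here is a toy version of the idea using networkx: words from a text become graph nodes, a search yields a path between them, and the path is reduced to numbers that a fluid simulation could consume. All node names, weights, and the parameter mapping are illustrative, not the studio’s actual implementation.

```python
# A toy sketch of the idea described above: words become nodes in a graph,
# a graph-search algorithm walks between them, and the resulting path is
# turned into numbers that could steer a fluid simulation. Node names,
# weights, and the mapping to simulation parameters are all illustrative.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("love", "longing", 0.3),
    ("longing", "silence", 0.7),
    ("silence", "dance", 0.4),
    ("love", "dance", 0.9),
])

# Shortest weighted path between two words from the text.
path = nx.shortest_path(G, source="love", target="dance", weight="weight")

# Map each edge weight along the path to a hypothetical "sweep strength"
# parameter for the fluid simulation.
sweeps = [G[a][b]["weight"] for a, b in zip(path, path[1:])]
print(path, sweeps)
```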


[26:39] BW: Is Unsupervised a GAN or diffusion?

[26:45] CB: Unsupervised is GAN-based. What’s interesting is that it’s a real-time GAN. Inside the MoMA lobby, we added motion tracking, audio recording, and a weather station. For any AI model, you need input parameters to yield outputs. We harnessed real-time data from the space itself, using it as input parameters for the GAN model, which greatly influenced the art piece.
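As an illustration of that loop, the sketch below folds a few live readings (motion, audio level, weather) into the latent vector fed to a GAN generator. The studio’s real model and sensor interfaces aren’t public, so the generator is a placeholder and the blending scheme is invented for the example.

```python
# Illustrative only: fold live readings (motion, audio level, weather) into
# the latent vector fed to a pretrained GAN generator. The studio's actual
# model and sensor interfaces are not public; `generator` is a placeholder.
import torch

LATENT_DIM = 512

def sensor_conditioned_latent(motion, audio_level, temperature, seed=0):
    """Blend a base latent vector with a perturbation built from sensor data."""
    g = torch.Generator().manual_seed(seed)
    base = torch.randn(1, LATENT_DIM, generator=g)
    # Normalize the three readings into [0, 1] and tile them across the vector.
    signal = torch.tensor([motion, audio_level, temperature]).clamp(0, 1)
    perturb = signal.repeat(1, LATENT_DIM // 3 + 1)[:, :LATENT_DIM]
    return base + 0.5 * perturb

z = sensor_conditioned_latent(motion=0.8, audio_level=0.3, temperature=0.6)
# frame = generator(z)  # a pretrained generator would map z to an image
print(z.shape)
```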


[27:11] BW: That’s innovative. I recall a mention in the PyTorch talk about an EEG headset for mental health.

[27:26] CB: Mental health is crucial to our work. Art often has therapeutic effects. We’ve been researching EEG data and recently ran a project at the MoMA where participants wore EEG caps, and we recorded their brain data while they viewed the artworks. This aimed to gauge the emotional and cerebral response to art. An exciting direction we’re exploring involves the EEG data from the brain cap. We intend to use it to guide the art experience. So, if someone reacts a certain way to one piece, we’ll tailor the subsequent content based on that reaction.
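A hypothetical version of that feedback loop might look like the following: estimate band power from a single EEG channel and let it pick the next piece. Real EEG work involves far more (artifact removal, many channels, calibration); this only shows the shape of the idea.

```python
# Hypothetical sketch of the feedback loop described above: estimate band
# power from one EEG channel and use it to pick the next piece. Real EEG
# pipelines are far more involved; this is only the shape of the idea.
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed)
eeg = np.random.randn(fs * 10)  # 10 seconds of fake single-channel EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()   # band often linked to relaxation
beta = psd[(freqs >= 13) & (freqs <= 30)].mean()   # band often linked to alertness

next_piece = "calm_sequence" if alpha > beta else "high_energy_sequence"
print(alpha, beta, next_piece)
```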


[28:39] BW: Are there partnerships with psychologists or medical institutions for this progression?

[28:50] CB: We had a medical researcher lead a recent project. While I can’t provide specifics about partnerships, having medical experts on board is crucial.


[29:08] BW: Do you have a timeline for when we might hear more about this? I’m intrigued by the intersection of EEG, mental health, and art.

[29:23] CB: We’re aiming to release many of these tools at the start of next year. We’ve been working on a project named Dataland. It represents much of our recent work, offering immersive experiences that highlight our capabilities. We’re planning its release in 2024, when more of our developments will become public.


Dataland

[29:47] BW: Could you elaborate on Dataland? Is it open yet, and how can people access it?

[29:56] CB: I must be discreet about it. However, we’re establishing a physical space in downtown LA, slated to open within the next year.


[30:09] BW: That’s exciting. I’ve often heard the term “latent space” concerning AI. Could you clarify its meaning for our audience?

[30:24] CB: Absolutely. Taking the GAN model as an example, when you train an image-based model like a GAN, you have input data consisting of real images and an end component comprising computer-generated images. Latent space is the intermediary area that translates the input data into these outputs. Picture latent space as a 3D field you navigate. Depending on where you are in this space, you’ll encounter various outputs. For instance, one region might produce images of flowers, while another offers images of buildings. Essentially, latent space is the bridge between input data and computer-generated images.
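A small code illustration of “navigating” latent space: interpolate between two latent vectors so that a generator (a placeholder here) would morph from one kind of output toward another, say flowers toward buildings.

```python
# Linearly interpolate between two latent vectors; a pretrained generator
# (placeholder below) would turn each step into a different image.
import torch

LATENT_DIM = 512
z_flowers = torch.randn(1, LATENT_DIM)    # a point that might decode to flowers
z_buildings = torch.randn(1, LATENT_DIM)  # a point that might decode to buildings

steps = 8
for i in range(steps + 1):
    t = i / steps
    z = (1 - t) * z_flowers + t * z_buildings  # walk through latent space
    # frame = generator(z)  # each step would yield a different image
    print(f"step {i}: t={t:.2f}, norm={z.norm().item():.1f}")
```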


[31:38] BW: I’m familiar with the difference between diffusion and GAN from past research, but I’m a bit rusty. Can you clarify the distinction and the process of choosing between them? Is it akin to choosing between oil and acrylic?

[31:58] CB: Your comparison is apt. Diffusion models are text-to-image-based, so they focus on the prompts you use and the resulting outputs. GANs, on the other hand, are image-to-image-based. Unlike diffusion, there isn’t a direct input query in a GAN to obtain a specific output. That’s the primary distinction, although there are other technical nuances.
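The difference in interfaces can be seen in a few lines. The diffusion half uses the Hugging Face diffusers library with an example public model (not one of the studio’s); the GAN half has no prompt at all: you sample or search a latent vector, and the generator here is a placeholder.

```python
# Contrasting the two interfaces described above. The diffusion model name
# is an example public checkpoint, not the studio's; weights download on
# first run. The GAN generator is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

# Diffusion: text in, image out. The prompt is the steering wheel.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
image = pipe("an aerial photograph of New York City at dusk").images[0]
image.save("diffusion_output.png")

# GAN: no prompt. You sample (or search) a point in latent space instead.
z = torch.randn(1, 512)
# image = gan_generator(z)  # placeholder for a pretrained GAN generator
```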


[32:34] BW: I’m intrigued by your work with AI and scent. Can you shed light on this area and its future direction?

[32:47] CB: AI in scent is indeed fascinating. Beyond just scent, our research has revolved around the “hyper multimodal” concept. We aim to explore different facets of a single concept – like a flower – through AI. So, considering a flower, we’d investigate its AI-generated image, scent, sound, and even texture. This multifaceted approach offers diverse ways to interact with models and derive various outputs. For scent, perfumes are crafted from a mix of bases and chords. Depending on an image’s representation in a GAN model, we can convert it to a chord series and formulate a perfume accordingly. A memorable example is a Bulgari installation in Italy. It involved a mirrored cube outside a Bulgari shop, where we introduced scent into the room in real-time. As viewers watched projected visuals, we had a scent machine delivering corresponding fragrances, such as floral during a flower scene or aquatic notes during a water scene.


[34:34] BW: How do you train a scent model?

[34:38] CB: Training a scent model is unique. Without revealing too much, in the AI realm, data can be converted into a number series, which is foundational to data science and machine learning. An image is transformed into what we call an embedding by processing it through a neural network. This embedding’s data can be used in various ways, essentially crafting a translator that turns an image vector into a scent.
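Here is a rough sketch of that translator idea: a pretrained vision network turns an image into an embedding, and a small untrained MLP stands in for the model that maps embeddings to chord quantities. Every model name and dimension is illustrative; this is not the studio’s pipeline.

```python
# Sketch of the "translator" idea: an image becomes an embedding through a
# pretrained vision network, and a small untrained MLP stands in for the
# model that maps embeddings to perfume chord quantities. All names and
# dimensions here are illustrative, not the studio's pipeline.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone produces the image embedding.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()  # keep the 512-dim feature vector
backbone.eval()

# Hypothetical translator: embedding -> weights over, say, 8 chords.
translator = nn.Sequential(
    nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 8), nn.Softmax(dim=-1)
)

image = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed photo
with torch.no_grad():
    embedding = backbone(image)          # shape (1, 512)
    chord_mix = translator(embedding)    # shape (1, 8), sums to 1
print(chord_mix)
```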


[35:24] BW: So, can that embedding be input into scent machines to produce the actual fragrance?

[35:34] CB: Exactly. It’s akin to translating from one language to another. The image’s embedding is processed by the translator, which then instructs on the needed quantities of various chords to create the scent.


[35:51] BW: That’s incredible. The scent technology is mind-blowing. And you mentioned texture, can you elaborate on that?

[36:03] CB: Some aspects of that are still confidential, but we’ll share more openly at Dataland next year.


[36:14] BW: That’s exciting news! On another note, have you begun exploring Apple Vision Pro’s applications and its implications for art?

[36:28] CB: Apple Vision Pro is truly groundbreaking. The standout feature they offer is the quality of the visualizations. Currently, high-resolution VR requires a powerful PC and a top-tier headset, both connected. In the AI art context, display quality greatly impacts the overall experience. Viewing digital art in low resolution isn’t ideal unless it’s a specific style like pixel art. The Vision Pro promises significant changes in content presentation, and I’m eager to see its evolution.


[37:42] BW: So you haven’t started working with their OS yet?

[37:52] CB: No comment.


[37:55] BW: I’m trying to uncover a lot of secrets from you.

[37:58] CB: I’m sorry.


[38:05] BW: I’d like to understand the ideation process behind many of these projects. Do you begin with finding interesting data, or do you have ideas first and then seek the required data?

[38:25] CB: It varies by project. There are generally a couple of main types of projects we undertake. One is self-directed, where someone, often Refik, suggests training a model with specific data, like Coral data. We then search for and collect the necessary data and proceed with training. There are many online resources with open-source data available, Flickr being one example. Another approach involves client-specific projects. For instance, AT&T Dallas provided us with data they wanted visualized, and we developed visualizations based on that. These represent the two primary ways we categorize our projects and source our data.
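For the open-data route, the Flickr public REST API is a typical starting point. The sketch below queries photo metadata for a search term; you would need your own API key, and licensing filters, pagination, and the actual image downloads are left out.

```python
# A minimal example of the "open data from Flickr" route: query the public
# Flickr REST API for photos matching a term. Requires your own API key;
# error handling, license filters, pagination, and downloads are omitted.
import requests

resp = requests.get(
    "https://api.flickr.com/services/rest/",
    params={
        "method": "flickr.photos.search",
        "api_key": "YOUR_API_KEY",  # placeholder
        "text": "new york city skyline",
        "per_page": 50,
        "format": "json",
        "nojsoncallback": 1,
    },
    timeout=30,
)
photos = resp.json().get("photos", {}).get("photo", [])
print(f"fetched metadata for {len(photos)} photos")
```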


[39:37] BW: Can you tell me about the AT&T project?

[39:41] CB: Of course. The AT&T project in Dallas involves a massive screen on the exterior of their downtown building. We collaborated on several pieces for this space. One involved unique visualizations of cellular data that were transformed into art. Another was for Clint Eastwood’s 50th anniversary celebrations, where we showcased images from his archive. The works are displayed on the AT&T Center’s external screen in downtown Dallas.


[40:17] BW: What’s been your favorite project so far?

[40:24] CB: MoMA is undoubtedly a significant project we’ve handled, both historically and personally. Another favorite of mine was the inaugural project I participated in, Machine Hallucinations: New York City. It was my introduction and involved an impressive dataset that was quite innovative for its time. Nature Dreams at KÖNIG GALERIE also stands out due to its extraordinary exhibition in a unique space. And then there’s Machine Hallucinations: Coral Dreams, where we set up a giant screen on a Miami beach, offering viewers a unique beachfront experience.


[41:26] BW: As a collaborator with Refik, do you have personal projects you work on?

[41:33] CB: Given our busy schedules at the studio, I rarely find time for solo projects. I head the AI and data science team, as well as the web and web3 teams. A lot of my side projects focus on bridging the gaps between these teams to enhance the experiences we offer.


[42:02] BW: What’s the forecast for machine learning and artificial intelligence in the upcoming year?

[42:10] CB: The trajectory for AI has been unpredictable over the recent years. One of the driving forces behind AI’s rapid advancement has been its growing accessibility. Previously, AI’s reach was limited as it wasn’t user-friendly for those without technical expertise. Our emphasis is on crafting tools to mitigate this. For example, we’ve developed the GAN Browser, allowing users to navigate latent space and interact with a GAN model using just a PlayStation controller. This kind of accessibility has empowered even kids to engage with AI models.
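The GAN Browser itself isn’t public, but the control loop it describes can be sketched: read two thumbstick axes with pygame and use them to nudge a latent vector that a generator (a placeholder here) would render each frame.

```python
# A stripped-down sketch of the GAN Browser idea: read two controller axes
# with pygame and use them to nudge a latent vector, which a generator
# (placeholder) would turn into the next frame. This is not the studio's
# tool, just the shape of the control loop it describes.
import pygame
import torch

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)  # requires a connected controller

z = torch.randn(1, 512)  # current position in latent space
step = 0.05

for _ in range(1000):  # a short interactive session
    pygame.event.pump()
    dx, dy = stick.get_axis(0), stick.get_axis(1)  # left thumbstick
    direction = torch.zeros(1, 512)
    direction[0, 0], direction[0, 1] = dx, dy  # move along two latent axes
    z = z + step * direction
    # frame = generator(z)  # render the new point in latent space
```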

AI tools like ChatGPT have gained popularity because of their user-friendly nature. Looking ahead, I believe the focus will shift towards offering broader access to these models while ensuring transparency in their operations. This includes educating users about data sources, collection processes, ensuring ethical practices, and fostering a beneficial, informative interaction with AI.


[43:55] BW: What can our listeners look forward to from Refik Anadol’s studios in the near future? Besides Dataland, is there anything else?

[44:10] CB: We’re diving deep into projects in the web3 space. Recently, we’ve activated burns for many of our collections, transforming still images into moving artworks. This has been positively received by our collectors. We have several studio projects releasing soon. In fact, this morning, it was announced that we’ll showcase a collaboration with The Weeknd at the Sphere in Las Vegas. We’re thrilled about this, so do check out the article if you haven’t. Several other projects in the web3 space are on the horizon.


[44:56] BW: That sounds exciting. I’m especially eager for Dataland. Might have to plan a trip to LA.

[45:02] CB: We’ll be unveiling more about Dataland in the coming six to eight months. We promise it’s going to be intriguing.


[45:10] BW: It’s been great having you on Pixels and Paint, Christian. Anything else you’d like to share? And where can our listeners connect with you?

[45:27] CB: I appreciate the opportunity. My advice is simple: embrace AI, but use it ethically. For those wanting to connect, I’m on Twitter as @christianburke0, and also available on LinkedIn and Instagram under the same handle.


[45:49] BW: Thanks for joining us. I’m genuinely excited about visiting Dataland.

[45:55] CB: Thank you. We’ll be waiting.


Listen to the episode on Apple Podcasts, Spotify, Overcast, iHeart, PlayerFM, Podchaser, Boomplay, Tune-In, Podbean, Google Podcasts, Amazon Music, or on your favorite podcast platform.

Read the Show Notes

Watch on YouTube