About


Early adoption of new technology has been a family tradition. My father, the founding CTO of PepsiCo, integrated 3D computer graphics into the company’s brand portfolio in the mid-1980s. His fascination with technology continued through the dot-com boom of the late 1990s, and he passed this passion for innovation and drive to self-teach down to me and my siblings as we grew up in the new millennium.

By the mid-2010s, I was already diving into AI and digital media, beginning with meme culture and online music communities (see more on the "New Media" page). My early projects included working with a community of creators to build specialized tools that reverse-engineered proprietary video game music formats and converted them into formats compatible with Digital Audio Workstations (DAWs). This effort let us isolate and remix game music, opening up creative possibilities that traditional software couldn’t offer. These projects went viral and helped me establish a foundation in digital audio manipulation and AI-enhanced tools.
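
As a much-simplified illustration of the "make it DAW-friendly" step, here is a minimal sketch that batch-converts extracted game audio to WAV with FFmpeg; the folder names, the .adx input format, and the use of FFmpeg itself are assumptions for the example, since the original tools were custom-built for proprietary formats.

```python
# Illustrative only: batch-convert extracted game audio into DAW-friendly WAV files
# with FFmpeg. The folder names and the .adx input format are hypothetical examples.
import pathlib
import subprocess

SRC = pathlib.Path("extracted_game_audio")
DST = pathlib.Path("daw_ready")
DST.mkdir(exist_ok=True)

for track in SRC.glob("*.adx"):  # ADX is one common console audio format
    out = DST / (track.stem + ".wav")
    subprocess.run(["ffmpeg", "-y", "-i", str(track), str(out)], check=True)
```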

Around 2016, I began experimenting with AI models to isolate vocals from music tracks without phase inversion, the technique traditionally used to cancel vocals in studio recordings. Building on this, we refined the technology to demix audio down to individual instruments and frequency ranges, even separating individual voices within layered tracks. In recent years, I have expanded these workflows, developing techniques to upmix monaural audio to multichannel formats like 5.1 surround sound.
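
For a concrete flavor of modern AI demixing, the sketch below splits a track into vocal and instrumental stems with the open-source Demucs separator; the tool choice, file name, and output folder are illustrative assumptions rather than the exact workflow described above.

```python
# Illustrative only: AI-based stem separation with the open-source Demucs model.
# Assumes `pip install demucs` and a local file named "track.mp3" (hypothetical).
import subprocess

# Write "vocals" and "no_vocals" stems under ./separated/
subprocess.run(
    ["demucs", "--two-stems=vocals", "-o", "separated", "track.mp3"],
    check=True,
)
```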

My work with AI-driven tools soon earned me a place in OpenAI’s beta testing program in 2018, where I was granted early access to their generative AI suite, including GPT and DALL-E. Later, while at Mattel, I contributed to training their internal Generative and Agentic AI models, fine-tuning datasets for legacy brands like Barbie, American Girl, and Barney the Dinosaur. These AI-powered datasets enhanced brand-specific language and storytelling, supporting various projects across the company’s extensive portfolio.
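
As a generic illustration of what brand-voice fine-tuning data can look like, this sketch writes a single chat-formatted JSONL training record; the brand, wording, and file name are invented placeholders, not material from any of the projects above.

```python
# Illustrative only: one record of a chat-style fine-tuning dataset in JSONL form.
# The brand voice, prompt, and reply below are invented placeholders.
import json

record = {
    "messages": [
        {"role": "system", "content": "You write in the playful, upbeat voice of BrandX."},
        {"role": "user", "content": "Describe the new playset in one sentence."},
        {"role": "assistant", "content": "Snap it open and a whole tiny world pops up, ready for big adventures!"},
    ]
}

with open("brand_voice_train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```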

Generative AI also became a powerful tool in my artistic work, especially for creating style-transfer models that sped up the ink-and-paint process in cel animation. During this period, I began pioneering “prompt engineering”: using language as a form of programming to produce highly detailed, controlled outputs from generative models.
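
To show what "language as programming" can mean in practice, here is a small sketch that assembles a controlled image prompt from explicit, named parameters; the template slots and wording are hypothetical, not prompts from any production pipeline.

```python
# Illustrative only: composing a detailed, controlled generative-image prompt
# from explicit parameters. All slots and values are hypothetical examples.
def build_prompt(subject: str, medium: str, palette: str, constraints: list[str]) -> str:
    """Turn named parameters into a single, tightly specified prompt string."""
    constraint_text = "; ".join(constraints)
    return f"{subject}, rendered as {medium}, {palette} palette. Constraints: {constraint_text}."

prompt = build_prompt(
    subject="a lighthouse on a storm-battered cliff at dusk",
    medium="hand-inked cel animation with flat fills",
    palette="limited three-color",
    constraints=["clean line weight", "no photorealism", "single light source"],
)
print(prompt)
```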

While working at Dr. Seuss Enterprises, I applied this expertise to brand language, building generative models that replicated Dr. Seuss’s unique rhyming style and playful tone across character dialogue and public communications. The role deepened my knowledge of AI in creative branding, an approach I have carried throughout my professional life since.

In recent years, my work with GenAI has expanded further into music production, where I’ve prototyped and put into practice a neuralized reinvention of the soundfont. Adapting tools initially developed for voice cloning, I built new instrumental models from mere seconds of training data, advancing generative AI’s ability to create unique audio textures and voices in music production. This deep-sampling approach can best be heard in the short films I’ve produced independently, along with friends and business partners in a similarly "indie" scene.

During the turbulent economy of Donald Trump's second administration, I trained a suite of custom GPT-based agents for general use in the self storage industry, with additional fine-tuning for specific companies and regional chains. Treating this suite, along with public agents powered by Swivl, as its own virtual wave of hires, I focused on streamlining as many daily deliverables as possible, particularly accounting work and "virtual call centers" on customer-facing websites. Our main goal is to automate as much as possible, freeing up human staff to thrive at work.
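
To make the "virtual hire" idea concrete, here is a minimal sketch of a customer-facing assistant for a self-storage website built with the OpenAI Python SDK; the model name, system prompt, and sample question are assumptions, not the production agents described above.

```python
# Illustrative only: a minimal virtual front-desk agent for a self-storage site,
# sketched with the OpenAI Python SDK. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a virtual front-desk agent for a self-storage facility. "
    "Answer questions about unit sizes, pricing, gate hours, and billing, "
    "and hand off to a human for anything involving account changes."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What size unit fits a one-bedroom apartment?"},
    ],
)
print(response.choices[0].message.content)
```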