Emerging Tech
Early adoption of new technology has always run in my family. My father
was the founding CTO of Pepsi and helped integrate 3D CGI and computer
graphic design into the company's portfolio of brands in the mid-1980s.
He continued to chase developments in IT solutions into the dot-com
boom of the late 1990s, passing down to my siblings and me, practically
from our births in the new millennium, both his love of these
technologies and the wherewithal to actively teach ourselves newer
tech.
In the mid-2010s, I started leveraging restorative AI in memes I'd
post online (more info on the "New Media" page), many of which would go
viral over the coming years and pave the way for future developments
and opportunities. By around 2016, working with the community of
content creators I associated with at the time, I had helped hone AI
models that isolate vocals from music tracks without the traditional
method of phase-inverting studio instrumentals. Before long, we had
refined the technology to demix audio recordings even further. Over
the early 2020s, my own work has grown into workflows that can demix a
track down to separate instrumental parts, including isolating
individual voices from a multitrack with several voices hard-layered
into the mix (or backup singers in a song), as well as upmix audio as
simple as a monaural recording to 5.1 surround sound.
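For context, the traditional phase-inversion method mentioned above (the one the AI models made unnecessary) can be sketched in a few lines of NumPy: inverting the instrumental's polarity and summing it with the full mix cancels the shared content, leaving the vocal. The demo below uses synthetic sine waves rather than real audio.

```python
import numpy as np

def isolate_vocals(mix: np.ndarray, instrumental: np.ndarray) -> np.ndarray:
    """Classic phase-inversion vocal isolation: subtract the studio
    instrumental from the full mix. This only works when both tracks
    are sample-aligned renders of the same master."""
    n = min(len(mix), len(instrumental))
    return mix[:n] - instrumental[:n]

# Synthetic demo: a "vocal" sine plus an "instrumental" sine.
t = np.linspace(0, 1, 44100, endpoint=False)
vocal = 0.3 * np.sin(2 * np.pi * 440 * t)
instrumental = 0.5 * np.sin(2 * np.pi * 110 * t)
mix = vocal + instrumental

recovered = isolate_vocals(mix, instrumental)
print(np.allclose(recovered, vocal))  # True: exact for aligned tracks
```

The fragility of this trick (any misalignment or remaster breaks the cancellation) is precisely why learned source-separation models became attractive.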
My expertise leveraging restorative AI in memes in my youth directly
led to a spot in OpenAI's beta-testing program in 2018, where I was
first granted early testing access to the state-of-the-art generative
AI suite that includes the likes of GPT and DALL-E. While employed at
Mattel, I've also contributed training to the company's internal forks
of such LLM tools, helping fine-tune models on its nearly century-old
portfolio of brands. Several projects from Barbie and American Girl,
as well as the global relaunch of Barney the Dinosaur, have heavily
utilized LLM datasets I've contributed to.
Alongside traditional art, I've also grown to leverage gen AI to
accelerate and enhance my workflows, particularly by developing
style-transfer (image-to-image) models to expedite ink-and-paint
procedures in cel animation.
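As a toy illustration of the "paint" half such image-to-image models automate, here is a minimal sketch that snaps each shaded pixel to its nearest flat cel color. The palette and function names are my own invention for this example, not any studio's actual tooling.

```python
import numpy as np

# Hypothetical flat cel palette (RGB) -- illustrative only.
PALETTE = np.array([
    [0, 0, 0],        # ink line
    [255, 224, 196],  # skin tone
    [200, 40, 40],    # costume red
    [255, 255, 255],  # highlight
], dtype=float)

def flat_fill(image: np.ndarray) -> np.ndarray:
    """Snap every pixel of an HxWx3 image to its nearest palette
    color -- the flat-color look of traditional ink & paint."""
    flat = image.reshape(-1, 3).astype(float)
    # Squared distance from each pixel to each palette entry.
    d = ((flat[:, None, :] - PALETTE[None, :, :]) ** 2).sum(axis=2)
    nearest = d.argmin(axis=1)
    return PALETTE[nearest].reshape(image.shape).astype(np.uint8)

shaded = np.array([[[250, 250, 250], [190, 45, 50]]], dtype=np.uint8)
painted = flat_fill(shaded)
print(painted[0, 0].tolist(), painted[0, 1].tolist())
# [255, 255, 255] [200, 40, 40]
```

A trained style-transfer model learns this mapping (plus line cleanup and region awareness) from example frames instead of a fixed palette, but the target look is the same: hard-edged flat fills.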
From my early days of testing GPT in its infancy, I quickly realized
the importance of the written word as a new programming language for
gen AI. Once ChatGPT was first made widely available to the general
public, I was one of the trailblazers of "prompt engineering,"
leveraging the English language to explicitly instruct models down to
the most intimate minutiae for all sorts of freelance engagements.
When I worked at Dr. Seuss, I helped leverage gen AI across all sorts
of company communications, enforcing the master brand's signature
rhyme scheme in character dialogue, public documentation, and
marketing copy. Dr. Seuss was my premier vehicle for piloting my vast
self-taught knowledge of gen AI before moving on to Mattel and other
enterprises, pinpointing the company's signature aesthetics and
language down to an exact science, which its internal models were able
to replicate to a T.
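To give a concrete sense of what "enforcing a rhyme scheme" can mean in a pipeline, here is a crude sketch of an automated check on generated couplets. The suffix heuristic and function names are my own illustration, not the enterprise's actual validation tooling; a production system would compare phonemes (e.g. via a pronouncing dictionary) rather than spelling.

```python
def rhyme_key(line: str, length: int = 3) -> str:
    """Crude end-rhyme key: the last few letters of the final word.
    Spelling-based, so it misses rhymes like 'through'/'blue'."""
    word = line.rstrip(".,!?").split()[-1].lower()
    return word[-length:]

def follows_couplets(lines: list[str]) -> bool:
    """True if consecutive pairs of lines (an AABB scheme) rhyme."""
    return all(
        rhyme_key(lines[i]) == rhyme_key(lines[i + 1])
        for i in range(0, len(lines) - 1, 2)
    )

verse = [
    "I do not like them in a house.",
    "I do not like them with a mouse.",
]
print(follows_couplets(verse))  # True: 'house'/'mouse' share a key
```

A gate like this can sit between the LLM and publication, flagging any generated copy that drifts from the house rhyme scheme for regeneration or human review.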