Early adoption of new technology has always run in my family. My father
was the founding CTO of Pepsi and helped integrate 3D CGI and computer
graphic design into the company's portfolio of brands in the mid-1980s,
along with now industry-standard CAD procedures in mass production.
He continued to chase developments in IT solutions through the
dot-com boom of the late 1990s, passing down to me and my siblings,
practically from birth in the new millennium, both his love of these
technologies and the wherewithal to actively teach ourselves newer
tech.
In the mid-2010s, I started leveraging restorative AI in memes I'd
post online (more info on the "New Media" page), many of which went
viral over the following years and paved the way for future developments
and opportunities. By around 2016, together with the community of
content creators I associated with at the time, I had helped hone AI
models to isolate vocals from music tracks without the traditional
method of phase-inverting studio instrumentals. Not long after, we had
refined the technology to demix audio recordings down even further. My
own developments over the early 2020s have grown into workflows that
let me demix a track down to separate instrumental frequencies,
including separating individual voices from a multitrack with several
voices hard-layered into the mix (or backup singers in a song), as well
as upmixing audio as simple as a monaural recording into 5.1 surround
sound.
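For context on what the AI approach replaced: the traditional phase-inversion trick mentioned above works by flipping the polarity of a studio instrumental and summing it with the full mix, cancelling everything the two share and leaving the vocal. A minimal sketch (using hypothetical integer PCM samples, not any real track) shows why it demands a perfectly aligned instrumental:

```python
# Traditional (pre-AI) vocal isolation by phase inversion: subtracting the
# studio instrumental from the full mix cancels the shared content and
# leaves the vocal. It only works when both tracks are sample-aligned and
# identically mastered, which is exactly the limitation AI separation lifts.

def isolate_vocals(mix, instrumental):
    """Subtract the instrumental from the mix, sample by sample."""
    if len(mix) != len(instrumental):
        raise ValueError("tracks must be the same length and sample-aligned")
    return [m - i for m, i in zip(mix, instrumental)]

# Toy 16-bit-style samples: vocal + instrumental = mix, so subtraction
# recovers the vocal exactly.
vocal = [100, -200, 300, 0]
instrumental = [500, 400, -100, 200]
mix = [v + i for v, i in zip(vocal, instrumental)]
print(isolate_vocals(mix, instrumental))  # → [100, -200, 300, 0]
```

Any timing drift or remaster between the two sources breaks the cancellation, which is why learned source-separation models are far more robust in practice.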
My expertise leveraging restorative AI in memes in my youth led
directly to a spot in OpenAI's beta testing program in 2018, where I
was first granted playtesting rights to the state-of-the-art generative
AI suite that includes the likes of GPT and DALL-E.
In direct response to the then-volatile state of a coastal self-storage
company's workforce, I trained a suite of custom GPT-based agents on
the company's entire backend and structures, treating the internal GPT
suite, as well as the public-facing Swivl chatbot, as its own virtual
wave of hires.
The main focus was streamlining menial deliverables such as daily
accounting and facility maintenance, running "virtual call centers" on
public websites, and providing a general assistant to enforce a
consistent brand tone across all external communications, whether with
tenants, prospectives, or third-party collaborators such as storage
unit liquidators. The former use cases have also involved heavy
reevaluation and restructuring of the chain of internal sites that make
up the company's backend; since the company is still surprisingly small
after almost 20 years, our main goal is to automate as much as
possible, freeing up human staff to properly thrive at work.
Another huge ripple effect of this AI push has been that, since so much
of it involves training machines on the same beats you'd teach living
people, the internal training materials have been updated for the first
time since 2016; before that, they were notoriously outdated across the
entire workforce. It can't be overstated how many hats everyone already
wears. The AI suite I've been developing has proven genuinely
innovative across all departments, freeing up busywork for people in
the call centers, middle management on-site, and all the way up to the
executive board, including the company's dedicated IT department, which
has been remarkably slow to adopt anything. Note: I do NOT work for
their IT staff.
While employed at Mattel, I've also contributed training to the
enterprise's internal forks of such LLM tools, helping fine-tune models
on their nearly century-old portfolio of brands. Several projects from
Barbie and American Girl, as well as the global relaunches of Barney
the Dinosaur and Angelina Ballerina, have heavily utilized LLM datasets
I've contributed to.
Alongside traditional art, I've also grown to leverage generative AI to
further accelerate and enhance workflows, particularly by developing
style transfer (image-to-image) models to expedite ink & paint
procedures in cel animation.
From my early days of testing GPT in its infancy, I quickly realized
the importance of the written word as a new programming language for
generative AI. Once ChatGPT was first made widely available to the
general public, I was one of the trailblazers of the concept of "prompt
engineering," leveraging the English language to explicitly instruct
models down to the most intimate minutiae for all sorts of freelance
engagements.
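The core of that practice is treating instructions as structure rather than suggestion: pinning down role, constraints, and format explicitly in the system message instead of hoping the model infers them. A minimal sketch (the helper name and brand rules here are hypothetical placeholders, not any client's actual guidelines):

```python
# A minimal sketch of prompt engineering as structured instruction: the
# system message enumerates explicit rules for the model to follow, and the
# user message carries the actual task. The rules below are invented
# examples, not real brand guidelines.

def build_brand_prompt(brand_rules, user_request):
    """Assemble a chat-style message list with explicit, numbered rules."""
    system = "You are a brand communications assistant.\n"
    system += "Follow every rule exactly:\n"
    system += "\n".join(f"{n}. {rule}" for n, rule in enumerate(brand_rules, 1))
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_request},
    ]

messages = build_brand_prompt(
    ["Write in second person.",
     "Never exceed two sentences.",
     "End with a clear call to action."],
    "Draft a reminder that rent is due Friday.",
)
print(messages[0]["content"])
```

The resulting message list matches the role/content shape used by chat-style LLM APIs, so the same template can feed whichever model backend is in use.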
When I worked at Dr. Seuss, I helped leverage generative AI across all
sorts of company communications, helping enforce the master brand's
signature rhyming scheme in character dialogue, public documentation,
and marketing communications. Dr. Seuss was my premier vehicle for
piloting my vast self-taught knowledge of generative AI before moving
on to Mattel and other enterprises; I was able to pinpoint the
company's signature aesthetics and language down to an exact science,
which their internal models replicated to a T.