Video Editing
(After Effects and Premiere Pro)

2019-2021

I enjoy video editing and am very comfortable with most post-production tools. This was the first art skill to catch my academic interest. When I was much younger, I loved filming movies on my own; until I discovered video game design, I wanted to work in film.

When I watch movies, I pay close attention to individual frames, searching for masking mistakes and computer-generated imagery. I have always enjoyed learning about the process of film production, and my post-production knowledge is what drives this habit of observing frames closely.

Spooky Loop

In 2019, I learned to create seamless loops using masks in After Effects. Inspired by the spirit of Halloween, I chose to make a series of themed loops.

The first video is a continuous stream of fog. I decorated the backdrop with a black cloth and holiday lights, aiming to create a dark ambience. My roommate helped me by blowing vape smoke through a straw into the cup. I took just three seconds of footage and masked it to create a seamless loop.

The second video is a pumpkin candle that never stops burning. My biggest challenge was masking the shadows so that the scene could respond to the moving light without a moving shadow, which I found distracting from the rest of the scene. I have since used this skill to mask other problematic objects, such as a tree branch blowing in and out of the frame. It's a tedious process that I tend to enjoy.

Lava Lamp

In 2021, I was assigned to consider the use of type and typeface for visual communication. My goal was to mimic the experience of watching a lava lamp while simultaneously promoting the object.

Below are the other typefaces I considered before settling on the final design, shown above.

Typeface Study:


Computer Vision

In 2020, machine learning became a central talking point among artists and designers. I was excited by the tools becoming available for commercial use, and over the past several years I have grown almost obsessed with them.

The first time I discovered machine-generated art, I was researching an essay. Evolution, created in 2013 by Swedish artists Johannes Heldén and Håkan Jonson, is a JavaScript poetic/musical artwork that generates evolving layers of text and ambient tones. The source files contain over a thousand one-minute pieces composed by Heldén, and the source code pulls its language from articles written by NASA and other scholarly sources. The textual data centers on physics and the nature of Earth and discovered exoplanets. I believe the intention of these choices is to elicit poetic, illustrative verses while sequencing the data.

Evolution was created as an exploration of the questions posed by Alan Turing's 1950 test, the imitation game. Turing, a founding figure of artificial intelligence, designed the test to examine the overlap between computation and human thought and to determine whether a machine can think. More specifically, the question is, "Can a machine do what we, as thinking entities, can do?" Exploring this question would challenge a programmer to create an artificial intelligence (AI) with human-like responses, leading Turing to construct a textual interview between a human evaluator and both another human and the AI. The aim is to determine whether the evaluator can tell which responses are programmed and which are organic.

While Evolution is not intended as a Turing test, it challenges the way most other AIs generate language, which is typically scripted output produced by checking through a series of response rules. Evolution constructs language by sequencing through articles with accurate definitions and periodically forming poetic phrases from the generated words. Evolution supposedly mimics the artistic attributes of Johannes Heldén in language and composition; however, its generations are often incoherent as a whole. The AI can only generate new phrases, so its work is never finalized and is continuously regenerated into something new.
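To illustrate the general idea, sampling vocabulary from scientific source text and recombining it into verse-like lines, here is a toy sketch in Python. It is my own simplification of the concept, not the artists' actual JavaScript, and the sample text is purely illustrative:

```python
import random

# A toy stand-in for the NASA-style source articles Evolution draws from.
# This vocabulary is illustrative, not the artwork's actual corpus.
SOURCE_TEXT = """
the exoplanet orbits a dim red star and its atmosphere
scatters light across silent oceans of frozen methane
gravity bends the signal as it falls toward earth
"""

def generate_verse(words, length=6):
    """Form a short verse-like phrase by sampling from the source vocabulary."""
    return " ".join(random.choice(words) for _ in range(length))

words = SOURCE_TEXT.split()
for _ in range(4):
    print(generate_verse(words))
```

Like Evolution's output, the resulting lines borrow accurate scientific language but are often incoherent as a whole; each run regenerates something new.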

In 2021, I dove deep into machine-generated images and text. I love that machines can be either realistic or imaginative, depending on the model. I have used several models, such as VQGAN and Midjourney, but the first was DeepDaze, which I ran from an Anaconda console.
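For reference, a DeepDaze run looks roughly like this, assuming the open-source deep-daze package; the parameter values here are illustrative defaults rather than my original settings, and the prompt is the one from the project below:

```python
from deep_daze import Imagine

# Sketch of a DeepDaze run; parameter values are illustrative.
imagine = Imagine(
    text="The sun and moon in the cosmos",  # the text prompt drives the image
    num_layers=24,       # depth of the image-generating network
    save_every=20,       # write an intermediate image every 20 iterations
    save_progress=True,  # keep every saved image instead of overwriting
)
imagine()
```

Saving the intermediate images is what makes the later animation possible: each saved step becomes one frame.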

Sun and Moon

This is a personal project using DeepDaze. I prompted the program to show me "The sun and moon in the cosmos". I thought I had no expectations about what the title would generate, but the image surprised me. I found the expressions on the personified objects cute and whimsical, unlike the photographic result I would have expected. The two have one thing in common: each has two smiling faces, one visible at the front and another hidden at the back. The Sun also appears to be sleeping, with a cold blue face in front of its yellow side, while the Moon is wide awake.

While letting the machine generate hundreds of images, I discovered that the objects in and around the scene seemed to dance as their colors and forms shifted. At that point, I realized I had generated too many images that looked entirely too similar; when compiled, they had too little variation to stay interesting. To do the project justice, I edited a short music video from the animation, which added some sentimental value to an otherwise repetitive product.
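Compiling the hundreds of saved frames into the animation can be done in a few lines. Below is a minimal sketch using moviepy; the filename pattern is a hypothetical example, not my actual project files:

```python
from glob import glob
from moviepy.editor import ImageSequenceClip

# Gather the saved frames in generation order
# (the "sun_and_moon" pattern is a hypothetical example).
frames = sorted(glob("sun_and_moon.*.png"))

# Play the slow drift of color and form back at 24 fps.
clip = ImageSequenceClip(frames, fps=24)
clip.write_videofile("sun_and_moon.mp4")
```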

See the video here.

Hello Computer

In 2021, I used VQGAN to complete an assignment that prompted me to tell a narrative through stop-motion animation. A question that inspired my narrative was, "To what extent can computers think like humans?" I aimed to test the computer's ability to perceive itself. In this exploration, I prompted it to imagine terms attributable to machines, such as computer, desktop, hardware, circuitry, math, and neural networks. The goal was to show off the intricacies of the machine, as well as its inability to ever know its true form.
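Structurally, the piece came from cycling those terms through the generator and collecting the intermediate images. A minimal sketch of that loop is below; generate_frames is a hypothetical stand-in for the VQGAN+CLIP generation step, not a real library call:

```python
# Terms the machine was asked to imagine about itself.
PROMPTS = [
    "computer", "desktop", "hardware",
    "circuitry", "math", "neural networks",
]

def generate_frames(prompt: str) -> list:
    """Hypothetical stand-in for the VQGAN+CLIP optimization loop,
    returning the intermediate images produced for one prompt."""
    raise NotImplementedError("replace with a VQGAN+CLIP pipeline")

# Cycle through each term, collecting frames for the stop-motion edit.
all_frames = []
for prompt in PROMPTS:
    all_frames.extend(generate_frames(prompt))
```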

See the video here.

Creation and Destruction

In 2021, I was assigned to create one video with two opposing parts using the same clips. I first created a series of images to serve as image prompts for the model, so it would reproduce the same shapes. I then used these re-imagined images to build the story, morphing from one to the next. The first video shows my process of creation and the order in which I selected my images. The second shows my process of deconstruction, telling an alternate story from the same material.

See the first video here.

See the second video here.