Shaping Light, Visualizing Sound, and The Generative Shift
#089 Creative Coding / Generative Arts Weekly
Hello friends,
Randomness and unpredictability increasingly shape modern life. Perhaps not always consciously, but it's undeniably present. It seems as though we’ve entered a generative age, where everything—from leadership decisions down to basic truths—feels more fluid than fixed.
It's funny that, in the shadows of the shaken snow globe of a world we live in, AI and the infancy of AGI are, at their very core, stochastic or pseudo-stochastic. It's strange that stochastic processes, carefully wielded, can be so powerful.
Reflecting on recent advancements through the rise of Large Language Models like ChatGPT, it's striking that AI now moves beyond mere retrieval of basic facts, as Google does, to seemingly grasping context. It can take words written by others and come up with new permutations of them. For example, if you ask for a poem in the style of Emily Dickinson about the future, you get:
The Future — is a Thing — unseen —
That perches — on the Air —
It sings — in Tones — not yet resolved —
But promises — are there —
It tiptoes — past — our Common Thought —
It will not — be confined —
It stirs — the Leaves — of Quiet Trees —
And beckons — from behind —
The captured essence gives AI an appearance of genuine thought and perception, something technology lacked until recently.
Similarly, in visual art, we witness AI merging one stylistic approach with another to create something entirely fresh. Take, for example, combining Van Gogh's Starry Night with Edvard Munch's The Scream while highlighting the boy in the prompt:
And it's fascinating that it gets so much of the context of the image: it has the Starry Night paint strokes, it has the bridge and the long strokes of The Scream, and it's just the added context of the boy that produces this iteration.
How should we engage with technology that's simultaneously nuanced yet inherently random? We can endlessly generate hundreds of images that share similarities, and yet the landscape of each one is different.
Obviously it is powerful, and for the foreseeable future we will only see more of it in the form of entertainment. Society will likely experience a significant adaptation period as we redefine entertainment and true artistry. There is a thread here: these tools let anyone, not just those who have "made it," create great content. That is a democratization of an artist's ability to do more. In video, it lets the conception of the artist speak through AI tools, which can create yet another step change in creativity for both.
I think of Srinivasa Ramanujan, a great mathematician at the turn of the 20th century who grew up in India. He was gifted with an intellect that very few have had, yet we only came to experience his work because he was given the opportunity to go to England and study at Cambridge.
Now, imagine a boy exploding with imagination in a developing country who wants to make a full-length film but doesn't have the capital to make it happen. Perhaps we will see something entirely new.
Yes, we know the large movie studios will use it to make more of the same. But when the essence of the human is taken out of the work, it will become unfashionable. So even though these tools will get better, the one thing they don't have is the inception of thought.
No AI today can make something from nothing.
For an AI to experience life, a lot more work on experience and emotional chaos has to happen inside the network. Perhaps when we are generating foundation models on completely synthesized datasets, then maybe we can start thinking of them as sentient.
"If we hesitate to call an organic, complex system like a tree 'sentient,' how will we grapple with recognizing sentience in digital, generative systems?"
These are whispers, ghosts in the system that emerge as hallucinations or "digital" glitches.
But if perchance we get to this idea of sentient beings, even then, would this be a digital species?
Would it not be simulating its own "ideas," completely independent from the ideas of humans?
This is why we are far in the future from any real prospect of AI enslaving people. Yes, people will enslave themselves; we see that with addictions and irresponsible use of technologies. But that has been happening since the beginning, and making the technology responsible for the irresponsibility of humans is a bit silly.
Yes, controls and safety are a very real and necessary pursuit. We need stop lights to control traffic, and we need a legal system to keep people accountable.
But the agency of any digital system at this point is just a loop.
Let me know what you think about these things; for whatever reason, I am curious about other viewpoints in this wily generative world we currently live in. I hope you all have a great week!
Chris
Tutorials & Articles
Shaping Light
As it turns out, post-processing is a great entry point to enhancing a 3D scene with atmospheric and lighting effects, allowing for more realistic and dramatic visuals. Because these effects operate in screen space, their performance cost is decoupled from the underlying scene's complexity, making them an efficient way to balance performance and visual quality. At the end of the day, when we work with effects, we are still just drawing pixels on a screen.
Maxime Heckel's site has appeared in this newsletter many times. He's not only a master at explaining complex concepts clearly, but his site also features clean, interactive design. This represents how great web content should be presented. In this article, he covers GLSL, Three.js, and browser rendering.
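To make the screen-space idea concrete, here is a minimal sketch, not taken from Maxime's article, of what a post-processing pass does: walk over a pixel buffer and recolor each pixel. The `Pixel` type and `applyVignette` function are hypothetical names for illustration; the point is that the cost depends only on the resolution, never on how many objects the scene contains.

```typescript
// Screen-space post-processing sketch: a vignette applied per pixel.
// Cost scales with width * height, never with scene complexity.

type Pixel = { r: number; g: number; b: number };

function applyVignette(
  pixels: Pixel[],
  width: number,
  height: number,
  strength = 0.5
): Pixel[] {
  return pixels.map((p, i) => {
    const x = i % width;
    const y = Math.floor(i / width);
    // Normalized offset from the screen center, each axis in [-0.5, 0.5].
    const dx = x / (width - 1) - 0.5;
    const dy = y / (height - 1) - 0.5;
    const dist = Math.sqrt(dx * dx + dy * dy);
    // Darken toward the edges; the center keeps its original color.
    const falloff = 1 - strength * dist * Math.SQRT2;
    return { r: p.r * falloff, g: p.g * falloff, b: p.b * falloff };
  });
}
```

In a real Three.js pipeline the same logic would live in a GLSL fragment shader running on the GPU, but the mental model is identical: one function per pixel.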
Automating & Recording OSC with Ableton
This is an in-depth walkthrough of OSC Automator, a Max for Live device I created to enable automating and recording OSC with Ableton Live. With this tool, you can use Ableton's powerful timeline and automation features to control any OSC-compatible software, including TouchDesigner, Synesthesia, Resolume, Unreal Engine, Processing, p5.js, and more.
It's a tool to check out for sure, as it is a nice accessory for real-time media artists.
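If you're curious what actually travels over the wire when a device like this talks to TouchDesigner or Resolume, OSC is a simple binary format. Here is a rough sketch of an encoder for a one-float OSC message, following the OSC 1.0 spec; it is not the device's actual code, and `encodeOscFloatMessage` is a made-up name.

```typescript
// Minimal OSC 1.0 message encoder: an address plus one float argument.
// Per the spec, strings are null-terminated and zero-padded to 4-byte
// boundaries, and numbers are written big-endian.

function padOscString(s: string): number[] {
  const bytes = [...s].map((c) => c.charCodeAt(0));
  bytes.push(0); // null terminator
  while (bytes.length % 4 !== 0) bytes.push(0); // pad to 4-byte boundary
  return bytes;
}

function encodeOscFloatMessage(address: string, value: number): Uint8Array {
  // Address, then the type tag string ",f" (one float argument).
  const head = [...padOscString(address), ...padOscString(",f")];
  const buf = new Uint8Array(head.length + 4);
  buf.set(head);
  // Big-endian IEEE 754 single-precision float.
  new DataView(buf.buffer).setFloat32(head.length, value, false);
  return buf;
}
```

A message like `encodeOscFloatMessage("/fader", 0.5)` would then be sent as a UDP datagram to the receiving software's OSC port.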
Visualizing Sound | Are.na Editorial
“One of the great things about visualizing sound is how many of the elements and principles of design you cover …” A deep dive into how artists like Travess Smalley and Daniel Lefcourt explore translating sound and music into visual notation.
Sound, and visualizing it, is always a fascinating thing to explore.
algovivo · Junior Rojas
“algovivo · an energy‑based formulation for soft‑bodied virtual creatures … no forces, just energy functions …” A simulation framework (C++/WASM/JS) that models soft-body creatures using energy minimization instead of conventional force physics.
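algovivo's actual implementation is C++/WASM, but the core idea, minimizing energy functions instead of integrating forces, can be sketched in a few lines. The 1D spring and the function names below are a toy of my own for illustration, not the library's code.

```typescript
// Energy-based simulation sketch: instead of computing spring *forces*,
// define an energy E(x) and step positions downhill via gradient descent.

// Energy of a 1D spring with stiffness k and rest length L: E = 0.5*k*(x-L)^2
const springEnergy = (x: number, k: number, L: number) =>
  0.5 * k * (x - L) ** 2;

// Numerical gradient via central differences, so any energy function works.
function grad(E: (x: number) => number, x: number, h = 1e-5): number {
  return (E(x + h) - E(x - h)) / (2 * h);
}

// Minimize the energy by gradient descent: the spring settles at rest length.
function settle(
  E: (x: number) => number,
  x0: number,
  steps = 1000,
  lr = 0.1
): number {
  let x = x0;
  for (let i = 0; i < steps; i++) x -= lr * grad(E, x);
  return x;
}
```

Starting a spring with rest length 1 at position 3, `settle((x) => springEnergy(x, 2, 1), 3)` converges to the rest length, with no force or velocity ever computed explicitly. Real soft bodies just sum many such energy terms over many vertices.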
Morphing | Observable (Jo Wood)
“For animations that morph between pairs of shapes we can use the Flubber library to do the hard work of calculating the intermediate shapes …” Interactive notebook demonstrating smooth shape morphing with the Flubber library.
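Flubber's hard work is resampling two arbitrary shapes to the same point count and matching the points up; once that's done, each in-between frame is plain linear interpolation. This is a conceptual sketch of that final step, not Flubber's API, and `morph` is a hypothetical name.

```typescript
// Conceptual core of shape morphing: once two shapes share the same number
// of matched points, intermediate frames are linear interpolation between
// corresponding points.

type Point = [number, number];

function morph(from: Point[], to: Point[], t: number): Point[] {
  if (from.length !== to.length) {
    throw new Error("shapes must be resampled to the same point count first");
  }
  return from.map(([x, y], i) => [
    x + (to[i][0] - x) * t, // lerp x toward the target point
    y + (to[i][1] - y) * t, // lerp y toward the target point
  ]);
}
```

Animating `t` from 0 to 1 then sweeps one outline into the other; the quality of the morph comes entirely from how well the point correspondence was chosen, which is exactly the part the library solves for you.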
WebGPU Fundamentals
“A set of articles to help learn WebGPU. Basics. Fundamentals · Inter‑stage Variables · Uniforms · Storage Buffers · Vertex Buffers; Textures.” A comprehensive tutorial series on modern GPU programming via WebGPU, from basics to advanced techniques.
WebGPU, though still quite new (in the internet-standards sense rather than the AI sense), will eventually replace much of WebGL 1/2 as the accepted standard. How soon depends on adoption, but this is a great resource to work through if you are interested in graphics programming for the browser.
Imagine.Art (AI Art Generator)
An alternative to services like Midjourney, but with some fascinating styles that bring interesting work to the foreground. Check it out; it also has a number of options beyond text-to-image, such as image-to-video and text-to-video. If you want to use it and provide credits for experiments, use the following link.
Unit (Visual Programming Language)
It is heavily inspired by Live, Data Flow, Reactive, Functional and Object Oriented Programming paradigms. Formally, units are Multi Input Multi Output (MIMO) Finite State Machines (FSM). A program in Unit is represented as a Graph.
The Unit Programming Language was developed in close conjunction with the Unit Programming Environment, which is a Web application built for easy composition of new units. The environment is designed to feel visual and kinesthetic, giving the perception of Direct Manipulation of Live Virtual Objects. The Unit Programming experience is minimalistic, ergonomic, and mobile; it can be performed through a variety of input devices, and editing can be partially expressed through Drawing, Gesture and Voice.
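As a toy illustration of the "multi-input multi-output finite state machine" idea (my own sketch, not Unit's actual implementation), a unit can be modeled as internal state plus a step function from a bundle of named inputs to a bundle of named outputs:

```typescript
// Toy MIMO finite state machine: each step consumes a bundle of named
// inputs and produces updated internal state plus named outputs.

type Inputs = Record<string, number>;
type Outputs = Record<string, number>;

interface Unit<S> {
  state: S;
  step(inputs: Inputs): Outputs;
}

// Example unit: two inputs (a, b), two outputs (running sum, last total).
function makeAccumulator(): Unit<number> {
  return {
    state: 0,
    step(inputs: Inputs): Outputs {
      const incoming = (inputs.a ?? 0) + (inputs.b ?? 0);
      this.state += incoming;
      return { sum: this.state, last: incoming };
    },
  };
}
```

A graph-based environment like Unit then wires one unit's output pins to another's input pins, so a whole program is a graph of these little machines ticking together.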
The design of this work has a bit of a sci-fi feel to it and reminds me of Lingdong Huang's work.
Let me know what you think!
Website | Instagram | Youtube | Behance | Twitter | BuyMeACoffee