Visualizing Sound, Lorenz in Blender, and Painterly Shaders
#081 Creative Coding / Generative Arts Weekly
Don’t let the daily routine kill your creativity. Remember who you were before you got that job. - Morr Meroz
Good morning!
As always, thank you for all of the support you provide.
As you will find, this edition is a bit more 3D-modeling and Blender heavy. That isn't always the case, but Blender is one tool I have really wanted to become more proficient in. Since I haven't spent nearly enough time with it, I've been working through Houdini and Blender while exploring what it would take to do the same things in other frameworks like Three.js and OpenSCAD.
Granted, my ultimate goal is to finish a project involving generative sculptures and mixed media, which I will eventually share once I have it right.
But just for fun, here is just one of my many little experiments…
Birdcall Identification using Deep Learning
In the wooded area behind our house, we've been trying to identify the type of owl we've been hearing over the past year. We often hear it in our dining room while playing Wingspan. Last week, during my research, I discovered a neural network called BirdNet that can identify bird sounds using a dataset from the Cornell Lab of Ornithology.
I placed an H3-VR sound recorder outside overnight to capture audio throughout the night. After collecting 10 GB of sound data, I analyzed it using BirdNet to identify the bird species that might be present in our backyard.
Unfortunately, I didn't capture any owl sounds that evening. However, I did record blue jays and yellow-bellied sapsuckers. In this section of the spectrogram, you can clearly see their distinctive sound signatures.
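As a rough sketch of how a spectrogram like this can be computed, here is a scipy example. A synthetic chirp stands in for the actual field recording, and the window sizes are arbitrary choices, not the settings used for the figure.

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for a field recording: a 2 s chirp sweeping
# through a typical songbird range (2-8 kHz) at a 48 kHz sample rate.
fs = 48_000
t = np.linspace(0, 2.0, 2 * fs, endpoint=False)
audio = signal.chirp(t, f0=2_000, f1=8_000, t1=2.0)

# Short-time Fourier transform: 1024-sample windows, 50% overlap.
freqs, times, sxx = signal.spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)

# Convert power to dB for display; the bright ridge traces the call.
sxx_db = 10 * np.log10(sxx + 1e-12)
print(sxx_db.shape)  # (frequency bins, time frames)
```

Plotting `sxx_db` with time on the x-axis and frequency on the y-axis gives the familiar spectrogram view where each species leaves its own signature.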
Well, if you zoom in a little closer.
The model (BirdNet) itself definitely needs more fine-tuning, but I can recognize the species from the sound the bird makes.
Since the pipeline is currently manual, I still need to write a few scripts that extract the detected points from the audio and display them alongside the spectrogram. That way I can really verify some of the sounds against a larger set.
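A script like that could start by slicing the recording around each detection window. This is only a sketch of the idea: the `(start, end)` detection times and the zero-filled audio buffer are hypothetical stand-ins for real BirdNet output and a real recording.

```python
import numpy as np

def extract_segments(audio, fs, detections, pad=0.5):
    """Slice audio around each (start_s, end_s) detection window,
    padding both sides so the call isn't clipped at the edges."""
    segments = []
    for start_s, end_s in detections:
        a = max(0, int((start_s - pad) * fs))
        b = min(len(audio), int((end_s + pad) * fs))
        segments.append(audio[a:b])
    return segments

# Hypothetical example: 10 s of silence standing in for a recording,
# with two detection windows as (start, end) times in seconds.
fs = 48_000
audio = np.zeros(10 * fs)
detections = [(1.0, 4.0), (6.5, 9.0)]
clips = extract_segments(audio, fs, detections)
print([len(c) / fs for c in clips])  # clip lengths in seconds
```

Each clip can then be re-analyzed or plotted next to the full spectrogram for manual verification.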
Anyhow, that might be more than you wanted to know, but I find the intersection of nature, technology, and science to be such a beautiful place to be.
Peace!
Chris Ried
Tutorials & Articles
From Lamps to Lungs
Nervous System's presentation from the CDFAM Computational Design Symposium in NYC is an exploration of their collaborations with scientists in the realm of 3D-printed organs. They show how science inspires their art and design work, which then feeds back into their scientific practice. The cycle continues as their work on organs contributes to their recent large-scale public artworks and vice versa.
Nervous System has been an inspiration for many years. Their aesthetic blends parametric and computational design with influences from nature. In this presentation, they apply their skills in a functional way, which is definitely worth watching. I'd recommend checking out their Patreon for more insights into their work.
PabloNet
The debate about whether internet-fitted AIs can be creative always seemed beside the point to me. Making art is hard. My view is that art is about surfacing the inner world, and only in part about skill. It’s unfortunate that art selects so strongly for skill. Can we decorrelate the two? It seems so. Cheap interpolative creativity used by 8 billion non-artists surely surfaces new views of the world.
More AI driven, but being able to mix the creative and AI together to inspire alternative experiences. Do check them out!
Interactive Motion Design using Unreal Engine 5.5
Unreal Engine's new motion design tools are incredibly fun, especially paired with everything else the engine has to offer. Here's a little interactive toy created with some cloner/effector systems, using some of the new features in 5.5 (color! torii!), as well as a fully procedural music system running in-engine.
A primary effector moves around with the left stick and triggers, and switches between different modes when face buttons are pressed. The main cloner system also tilts with the right stick. SVG import is really easy: I brought in a bunch of SVG icons, extruded geometry, and dropped them into a cloner rotating around the scene, all very quickly.
Max 9 Released
Max 9 introduces significant enhancements across multiple domains of the software:
- Jitter: specialized geometry structures for efficient vertex manipulation, new jit.fx objects for effects and compositing, and improved UI objects for interface building.
- Audio: ABL objects that interface with Ableton Live Suite devices, a new loudness~ object for EBU R 128 standard measurements, and jweb~ for integrating web browser audio into Max.
- Coding: the modern JavaScript V8 engine, enhanced codebox functionality, and a REPL interface.
- Patching and interaction: the Patcher List View and Illustration Mode, plus integrated OSC support, modern HID input handling, and enhanced parameter connection capabilities.
- Quality of life: improved documentation and a modernized preferences interface with syntax coloring for better readability.
Max is a powerful node-based visual and audio platform for creating computational experiences. Despite its versatility, it remains underappreciated outside of sound design circles. While primarily used by audio experimenters, Max offers extensive capabilities for visual artists as well.
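The integrated OSC support also makes Max easy to drive from outside code. As an illustration of what an OSC packet looks like on the wire, here is a minimal stdlib-only encoder; the address `/filter/cutoff` and the port in the comment are made-up examples, and on the Max side an object such as udpreceive would handle the parsing.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address string,
    padded type-tag string (",f"), then a big-endian float32."""
    return (osc_pad(address.encode())
            + osc_pad(b",f")
            + struct.pack(">f", value))

# Hypothetical address: whatever your Max patch listens for.
packet = osc_message("/filter/cutoff", 440.0)
# Send with socket.sendto(packet, ("127.0.0.1", 7400)) to reach the patch.
```

In practice a library like python-osc does this for you; the point here is just how simple the message format is.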
Lorenz Attractor in Python and Blender
Lorenz Attractor in Blender - Visualizing the Lorenz Attractor in 3D using Blender & Python
A quick tutorial on how to produce a visualization of the Lorenz Attractor using Python code in Blender.
If you are brand new to Blender, I’d recommend the beginner course by Blender Guru. I’ve also attached the code used in the tutorial so you can try it out and iterate further:
import bpy

# Class for the Lorenz attractor
class Lorenz:
    def __init__(self, sceneRef, objName, color, initX, initY, initZ):
        self.X, self.Y, self.Z = initX, initY, initZ
        self.dt = 0.0025
        self.a, self.b, self.c = 10, 48, 2.76
        self.color = color
        self.objName = objName
        self.sceneRef = sceneRef

    def Step(self):
        # One Euler step of the Lorenz equations (updates applied sequentially)
        self.X = self.X + (self.dt * self.a * (self.Y - self.X))
        self.Y = self.Y + (self.dt * (self.X * (self.b - self.Z) - self.Y))
        self.Z = self.Z + (self.dt * (self.X * self.Y - self.c * self.Z))

    def Generate(self):
        # Define number of points to be used
        numPoints = 20000
        self.curve = bpy.data.curves.new("LorenzCurve", type='CURVE')
        self.curve.dimensions = '3D'
        self.curve.bevel_depth = 0.25

        # Create spline poly and fill it with integrated points
        attractorPoly = self.curve.splines.new('POLY')
        attractorPoly.points.add(numPoints - 1)
        for i in range(0, numPoints):
            attractorPoly.points[i].co = (self.X, self.Y, self.Z, 1)
            self.Step()

        # Wrap the curve in an object and link it into the scene
        self.body = bpy.data.objects.new('curve', self.curve)
        self.body.name = self.objName
        self.sceneRef.collection.objects.link(self.body)

scene = bpy.context.scene
newColor = (1.0, 0.4, 0.0, 1.0)
attractor1 = Lorenz(scene, "attractor2", newColor, 0.1, 0.0, 0.0)
attractor1.Generate()
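If you want to experiment with dt or the a/b/c parameters before running anything inside Blender, the same sequential Euler update can be tried in plain Python with no bpy dependency. A minimal sketch:

```python
def lorenz_points(num_points=2000, dt=0.0025, a=10, b=48, c=2.76,
                  x=0.1, y=0.0, z=0.0):
    """Mirror of the Blender script's Step(): sequential Euler updates
    with the same dt and a/b/c parameters."""
    points = []
    for _ in range(num_points):
        points.append((x, y, z))
        x = x + dt * a * (y - x)
        y = y + dt * (x * (b - z) - y)
        z = z + dt * (x * y - c * z)
    return points

pts = lorenz_points()
# The trajectory stays bounded on the attractor rather than blowing up.
print(max(abs(v) for p in pts for v in p))
```

Feeding the resulting point list into the curve-building code is then a one-line change.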
Definitely check it out… It provides a lot of perspective on how Blender’s Python API works.
4D Objects In Blender
Making fractals in Blender has been possible since Blender 3.3. I find the 4D Julia shape very interesting and beautiful, so here's how to make it too.
The following is a tool created by the creator of the YouTube channel Bad Normals.
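For context, 4D Julia sets come from iterating q ← q² + c with quaternions in place of complex numbers. Here is a small Python sketch of the membership test; the constant c is an arbitrary illustrative choice, and a renderer like the Blender tool evaluates something like this per sample over a 3D slice of the 4D space.

```python
def quat_square(q):
    """Square a quaternion q = (w, x, y, z): writing q = w + v,
    q^2 = w^2 - |v|^2 + 2*w*v."""
    w, x, y, z = q
    return (w*w - x*x - y*y - z*z, 2*w*x, 2*w*y, 2*w*z)

def in_julia(q, c, max_iter=50, bailout=4.0):
    """Iterate q <- q^2 + c; points that stay bounded are in the set."""
    for _ in range(max_iter):
        w, x, y, z = quat_square(q)
        q = (w + c[0], x + c[1], y + c[2], z + c[3])
        if sum(v * v for v in q) > bailout * bailout:
            return False
    return True

# Arbitrary illustrative constant; a raymarcher would sweep q over a
# 3D grid (fixing one of the four coordinates) and shade the hits.
c = (-0.2, 0.6, 0.2, 0.2)
print(in_julia((0.0, 0.0, 0.0, 0.0), c))
```

The video's approach builds the surface with shader nodes rather than Python, but the underlying iteration is the same.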
On Painterly Shaders
Writing a shader that can reproduce the look and feel of aquarelle, watercolor, or gouache to obtain a more painterly output for my WebGL scenes has always been a long-term goal of mine. Inspired by the work of very talented 3D artists such as @simonxxoo or @arpeegee, the contrast between paintings and the added dimension allowed by 3D renderers was always very appealing to me. On top of that, my recent work with stylized shaders to mimic the hand-drawn Moebius art style emphasized not only that obtaining such stylized output was possible but also that post-processing was more likely than not the key to emulating any artistic style.
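One classic post-processing technique in this family (not necessarily the exact approach the article lands on) is the Kuwahara filter, which flattens brush-stroke-sized regions while preserving edges. A naive numpy sketch on a grayscale image:

```python
import numpy as np

def kuwahara(img, r=2):
    """Naive Kuwahara filter on a 2D grayscale image: each pixel takes
    the mean of whichever of its four (r+1)x(r+1) corner regions has
    the lowest variance, flattening texture while keeping edges."""
    h, w = img.shape
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            y, x = i + r, j + r  # position in the padded image
            quads = [padded[y - r:y + 1, x - r:x + 1],
                     padded[y - r:y + 1, x:x + r + 1],
                     padded[y:y + r + 1, x - r:x + 1],
                     padded[y:y + r + 1, x:x + r + 1]]
            best = min(quads, key=lambda q: q.var())
            out[i, j] = best.mean()
    return out

# A hard vertical edge survives filtering instead of blurring away.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(kuwahara(img))
```

A GLSL port of the same per-pixel logic is what you would drop into a WebGL post-processing pass.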
Blend Illustrative Compositing
Although some of this is more design driven, if you are a beginner, understanding the render process is critical: it shows how 3D rendering and compositing can produce unique images, such as a watercolor-like scene in Blender.
Pushing the Frontiers of Audio Generation
Speech is central to human connection. It helps people around the world exchange information and ideas, express emotions and create mutual understanding. As our technology built for generating natural, dynamic voices continues to improve, we’re unlocking richer, more engaging digital experiences.
Over the past few years, we’ve been pushing the frontiers of audio generation, developing models that can create high quality, natural speech from a range of inputs, like text, tempo controls and particular voices. This technology powers single-speaker audio in many Google products and experiments — including Gemini Live, Project Astra, Journey Voices and YouTube’s auto dubbing — and is helping people around the world interact with more natural, conversational and intuitive digital assistants and AI tools.
This is just one example of the continual progression of audio generation, and I find it worthwhile to pay attention.
Always thankful for your support!
Website | Instagram | Youtube | Behance | Twitter | BuyMeACoffee