…once upon a time, video killed the radiostar…tomorrow, code will kill the videostar…
in the meantime, no dj without a vj…
Sounds a lot like Simon Reynolds' criticism of contemporary performance arts:
I have to say that, as much as I like implementing and mixing visuals and sound, I also agree with Autechre's criticism of forcing visuals onto something that could also carry itself through sound-only presentation.
Contemporary shows are often so aggressively oversaturated with big batteries of lights, videos and lasers, which is also why I often don't like going to festivals where you're somehow expected to get punched in the face with blinking lights.
I’m 100% with you on this.
In a certain way, for musicians, it’s a shame that sight is humans’ strongest sense.
But blending visuals and sound will give birth to a new type of artist, that's for sure.
I can only enjoy these new immersive concepts as long as I understand the necessity to use transmedial approaches.
Good examples would be artists like Alva Noto, Byetone or 404.Zero, all of whom have an audiovisual language where visuals and sound go hand in hand. But most immersive works are nothing but ornamentation: two totally independent layers that have no connection to each other beyond maybe audio reactivity, and even that is often not the case.
On the other hand, this is what sells best. I have to say, from my personal experience: festivals etc. want to see "blinking lights" just as much as they prefer to invite artists who either work with "wait for the drop" music or extremely harsh EDM that encourages the audience to buy more drinks to endure these extreme sonic territories.
I can't help but feel like we use visuals as a crutch to aid in music's comprehension. Most songs I was introduced to in the 80s I understand mostly through their relation to their videos, which are still vivid in my mind. For songs from before and after, I have my own internal understanding, one that tends to be fuzzier and less "visual".
But maybe it’s a “crutch” in the way a calculator is a crutch? Sure I don’t know my times tables anymore, but neither do I experience that as some kind of profound loss. The important thing is getting to connect with music. If that’s aided directly through visuals or indirectly via my own invented head canon, either way is good.
And like the calculator, having connections aided for me by visuals allows me to appreciate a lot more music than if I were connecting through the rewarding but time-consuming process of careful critical listening. It feels like there has to be room for both.
and there is a pretty big difference between ‘musician’ and ‘performer’ - despite the obvious overlap.
you really notice this with the way instruments are marketed - things like portability, ‘performance controls’, etc are mostly aimed at people who might use this gear in a live show of some kind.
I can vividly remember when music videos first became a thing and people were having these very same conversations.
My take - as someone who works on the visual side of the industry - is that, as always, it all depends on your goal: if having a visual component to your work is important, then at least knowing what the landscape is like is worth the time/effort. Otherwise just walk away.
There really shouldn't ever be a feeling of "fear of missing out" for musicians when it comes to visuals, especially given the time/effort it takes to get a handle on these applications.
Since this is the repository of TouchDesigner info, I thought it ironic that on the TouchDesigner FB page someone linked this YouTube account as popular beginner tutorials.
The extra “K” is for “Knows Touchdesigner”.
TouchDesigner is good to understand in relation to its mom - SideFX Houdini. Houdini is a tool used in VFX for film, but Houdini's core strength lies in simulation - of particle effects, smoke, fire, fluids, that type of thing. So when you see the Incredible Hulk in a Marvel film smashing up a road and bits of gravel are flying everywhere - that's probably Houdini.
A couple of folks split off from SideFX and created their own company, comically called 'Derivative' - and if you were to compare the two programs, you'd find similar nomenclature. However, a key difference between Houdini and TD is that Houdini projects are sent to a render farm, whereas TD is 'real-time'.
Render farm meaning: with the level of realism and polygon detail required by production-level films, files are sent over the network to a render farm to be rapidly rendered and returned to the production house. TD has this spirit of 'VFX' and 'simulation' at its heart, but tweaks the production overheads so everything can run in real-time.
And the benefits of real-time are as everybody states above - interactivity for installations, reactivity for live performance etc. An artist also gains a WYSIWYG approach to creation - avoiding a render pipeline in a 3D context and instead being able to explore and see their results 'live'.
In terms of relevance, TD has been around a while now. It emerged in a PC-only format requiring quite a heavy system, then gained a Mac binary, and its system requirements have been polished over time, such that basically anyone can now dive in and have a play on a modern MacBook Air. This also means there's been time for tutorials to amass, for a community to develop around the software, and for changes like the program moving from its original 't-script' snippet language to Python for function input and relational scripting between nodes.
Further, since the primary developers and company are based in Canada, Derivative has regularly participated in, for instance, the MUTEK festival, offering workshops and training, and artists who use the software have performed live in venues such as the SAT.
Ultimately, TD is an extremely powerful and flexible tool that can handle pretty much anything you throw at it, but it is complicated. However, the large number of tutorials now available means the software is well supported by a large and active community, primarily couched within the live audio-visual performance and interactive installation scenes.
I’ll add a couple more things.
Both Houdini and TD are what you'd call procedural programming environments. You hear this term bandied about a lot - like planets in No Man's Sky being 'procedurally generated'. It's an interesting term, which I think can get somewhat confused with the term 'generative', but ultimately procedural is just a different programming style, which, if you really broke it down, can simply mean 'cause and effect'.
This means an artist creates a chain - or in TD, a 'network' - and the net result is the product of what comes before, so changing a node in the chain changes the final output.
This is quite simple and artist-friendly, if you like. As opposed to, say, object-oriented programming, where one creates sets of variables, objects, arrays, classes and their associated methods, and then 'calls' the objects in question to display based on the input data. TD still uses these sorts of programmatic elements, such as data lists, but again in this mapped-out procedural way.
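To make the 'cause and effect' idea concrete, here's a rough plain-Python analogy - this is not actual TouchDesigner code, and the node/function names are invented for illustration - where each "node" is just a function of its upstream node's output:

```python
# Toy "network": each node consumes the output of the node before it.
# Changing a parameter on an upstream node changes everything downstream,
# which is the "cause and effect" at the heart of procedural environments.

def noise(amplitude):
    # stand-in for a Noise node: produce some values scaled by a parameter
    return [amplitude * x for x in (0.1, 0.5, 0.9)]

def math_multiply(inputs, factor):
    # stand-in for a Math node wired downstream of the noise
    return [v * factor for v in inputs]

def cook(amplitude, factor):
    # "cooking" the network: the final output is simply the
    # product of everything that comes before it in the chain
    return math_multiply(noise(amplitude), factor)

print(cook(amplitude=1.0, factor=2.0))  # [0.2, 1.0, 1.8]
print(cook(amplitude=0.5, factor=2.0))  # tweak one upstream parameter,
                                        # and the end of the chain changes
```

In TD the wiring is drawn visually instead of written, but the dependency logic is the same: tweak a parameter box on any node and everything downstream re-cooks.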
I should say I'm a newb, so I might not be describing the above in the most accurate way, but the point is the differences. Likewise with node-based, or visual, programming as opposed to text-based.
I personally find TD to be extremely taxing on the mind. It's possibly just the way I'm wired, and I've always felt similarly about Max. But there's an aspect to that procedural approach which can really balloon out - such that I personally begin to feel like a 'processor', keeping track of all the settings, digging into parameter boxes to see what is set and what is going where. Oddly, visual programming languages are supposed to be more suited to artists, but I always found something like Processing to make much more sense and be more logical.
However, TD's advantage there is giving you way more bang for your buck, and higher fidelity, with less programming or algebra knowledge than you would need in C++ or Java.
Lately I've found Notch to be quite a usable node-based visual software. There are fewer code-snippet requirements (if any), enough flexibility built into the system that you can gravitate towards your own style, and it can give you much of that VFX-style simulation result while perhaps being even less cognitively demanding than TD. Notch's only real barrier to entry, though, is its pricing (if you want hi-res results), because it seems geared more to the pro touring market than, say, the indie AV scene.
TD noob here. I've dabbled with Processing & shaders on and off over the last few years, hesitant to take the plunge into TD. Been binge-watching tutorials the last few days and TD feels quite natural to me so far. I've been torn between this and game engines like Godot/Unreal. My only gripe: a license is (understandably) not cheap, and it's not really clear how this works regarding professional installations. Does the client also need to buy a license for a remote installation?
I’m coming up on some time off from work. Need to watch this.
It would be really interesting to get a detailed rundown from anyone who has made a full live set with TD and hardware alone. I mean playing with a bunch of music gear and at the same time playing TD - on screen, with MIDI, or with controllers.
I'm finally almost finished with the first working system for a show in two months, but I've got some shit to solve until it's stable for action. Most modulations are automated with LFOs and such, but to make it really weird I need to change some things manually as it is now.
Built a custom component for control and tried to keep it as simple as possible, but it's still a risk to lose focus from the audio.
So I've started working on this. I put my tracks together with an MPC and synth, and then use multiple audio ins on my computer for analysis, combined with MIDI in to get tempo and sync data. I'm trying to put each of my tracks in one big base component that takes audio analysis and sync CHOP data in and then outputs TOP data.
I decided I'm going to send Touch CCs and program changes on different tracks on my MPC, and then mute and unmute different combinations to avoid having to directly control anything in Touch live. It doesn't change that much in reality, and neither does my music really, so I can keep it pretty simple.
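The CC/program-change idea can be sketched in plain Python - this is not the poster's actual patch, and the CC numbers, scene names and class are all hypothetical; inside TouchDesigner this logic would typically live in a MIDI In DAT callback rather than a standalone script:

```python
# Toy dispatcher: program changes select the active "scene" component,
# CC messages mute/unmute layers within it. All numbers are made up.

SCENES = {0: "track_intro", 1: "track_drop", 2: "track_outro"}

class VisualState:
    def __init__(self):
        self.active_scene = SCENES[0]
        self.muted_layers = set()

    def on_program_change(self, program):
        # an MPC track change selects which base component's TOP is shown
        if program in SCENES:
            self.active_scene = SCENES[program]

    def on_cc(self, cc_number, value):
        # CC value 0 mutes the layer bound to that CC, anything else unmutes
        if value == 0:
            self.muted_layers.add(cc_number)
        else:
            self.muted_layers.discard(cc_number)

state = VisualState()
state.on_program_change(1)         # program change from the sequencer
state.on_cc(20, 0)                 # mute the layer bound to CC 20
print(state.active_scene)          # track_drop
print(sorted(state.muted_layers))  # [20]
```

The appeal of this approach is that all the "performing" stays on the MPC: the visuals only react to the same messages the music already sends, so nothing in Touch needs to be touched mid-set.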
Cool!
Care to explain this in more detail? Do you switch between different systems for every track?
Maybe it wasn't necessary, but I started a new thread about TD for folks who know what it is.
https://www.elektronauts.com/t/touchdesigner-for-live-visuals
Does anyone know if I'm allowed to upload videos from the non-commercial version of TD to a non-monetized YouTube channel? Or is it considered commercial and I'd need a commercial license anyway?
This person's work is nice. I think minimal music works nicely with techy, glitch-style visuals.
I first got into TouchDesigner via Amon Tobin’s ISAM project. Here’s an article on Derivative’s own site discussing the application built in TD, the projection mapping et cetera, as well as some video of it in action.
There's a decent book from Packt Publishing which goes in depth into Max/MSP in conjunction with TD, if you like learning from books. pppanik has some good tutorials on YouTube for creating the obligatory generative pieces.
My friend is learning it heavily and sometimes I help render some bits (because it's heavy on CPU/GPU).
I used to work with environments like VVVV and Processing in the past. From that perspective I find TD much more user-friendly (no need to dive into coding, the UI is better, the primitives are not as utilitarian/low-level). The only annoying part sometimes is the esoteric third-mouse-button click to navigate, but it's manageable.
wdym? like around the grid?