Re: [ecasound] Nama/Ecasound (was: Re: ecasound and lua?)

From: Philipp Überbacher <hollunder@email-addr-hidden>
Date: Tue Jul 20 2010 - 00:02:28 EEST

Excerpts from Joel Roth's message of 2010-07-19 22:04:59 +0200:
> On Mon, Jul 19, 2010 at 03:14:42PM +0200, Philipp Überbacher wrote:
> > Excerpts from Joel Roth's message of 2010-07-19 13:39:29 +0200:
>
> > > Nama's track caching is not realtime. However since there
> > > is no audio output, it generally takes only a fraction
> > > of the actual audio duration.
> >
> > So it's faster than realtime, which is good :)
> > Do I understand it correctly that you use the same ecasound chain setup
> > but with a file instead of the audio output?
>
> No, a separate routine generates the setup in that case.
> Nama's routing is the most complex part of the program,
> and has had three separate incarnations: (1) logic
> implemented in program code, (2) routing by sets of rules
> applied to sets of tracks, (3) routing by an intermediate
> graph that can be traversed, rewritten, and annotated.
>
> I never thought I would take that third step, but it turns
> out to make it possible to implement inserts (running a track
> signal through an external program or hardware effects box)

Wow, this sounds quite complex, especially for something that looks very
simple. No idea how I'll do this yet.
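
Coming back to the caching part above: here's roughly what I have in mind,
as a sketch only. I'm assuming the pyeca ECI module that ships with ecasound;
the file names and the two effects are just placeholders.

    # Sketch: render one track's effect chain to a cache file.
    # No realtime devices involved, so ecasound processes as fast as the disk allows.
    from pyeca import *

    e = ECA_CONTROL_INTERFACE()
    e.command("cs-add cache_setup")
    e.command("c-add cache_chain")
    e.command("ai-add track1.wav")          # the recorded take (made-up name)
    e.command("ao-add track1_cached.wav")   # file output instead of jack/soundcard
    e.command("cop-add -ea:85")             # placeholder effects: gain at 85 %...
    e.command("cop-add -efl:4000")          # ...and a 4 kHz lowpass
    e.command("cs-connect")
    e.command("run")                        # blocks until the whole file is written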

> > > That would be a simple-minded solution to get a revised waveform
> > > output after a change in effect parameters
> >
> > Yep, probably not optimal. Applying the effect + writing a new file +
> > creating a peakfile only to update a single parameter that changed slightly
> > sounds like huge overkill.
> >
> > > If that function ran in a separate process, the interruption
> > > to the user would be less. However one would have to ask
> > > who is willing to write and debug the code to do this. :-)
> >
> > I'm in the fortunate position of having no experience with threads
> > whatsoever, so this thought doesn't trouble me at all :)
>
> I don't have said experience either. Multiple processes are much
> easier to handle.

No idea about that either. Ignorance is bliss? :)
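
If I ever get to the re-rendering problem, I'd probably try the multi-process
route too: just start ecasound as a child process and poll it, so the
interactive side never blocks. A sketch, with made-up file names and a
placeholder rebuild_peakfile() helper:

    # Sketch: re-render in a child process, then refresh the peakfile.
    import subprocess

    def rebuild_peakfile(path):
        # placeholder: whatever regenerates the waveform peak data
        pass

    render = subprocess.Popen(
        ["ecasound", "-q",
         "-i", "track1.wav",
         "-ea:85", "-efl:4000",          # the current effect settings
         "-o", "track1_cached.wav"])

    # ... keep serving user input, check on the child once in a while ...
    if render.poll() is not None and render.returncode == 0:
        rebuild_peakfile("track1_cached.wav")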

> > > And why do you need to see what reverb, for example, does
> > > to the waveform? Or volume? I guess looking for overs...
> > > although you won't lose much if sound levels are okay
> > > and you have a limiter at the end of your mastering effects chain.
> >
> > Yes, looking for clipping would be an obvious application. I plan to
> > concentrate on jack first, which means 32-bit float. I heard that
> > clipping isn't possible there, but I must admit that I don't fully
> > understand it.
>
> If I express a signal as 1.89838274 x 10^n, I can represent any
> reasonably large value.

Then it's likely the normalization to +/-1 part that's confusing me. Well,
one day I'll understand.
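
Writing it down as a toy example actually helps a bit. If I got it right,
values above 1.0 are perfectly fine inside a float path; the hard clipping
only happens once you convert to an integer format at the very end:

    # Toy example, assuming the usual float convention of full scale = +/-1.0.
    samples = [0.4, 0.9, 1.3]             # one value already "over"
    boosted = [s * 2.0 for s in samples]  # gain in the float domain: nothing is lost

    def to_int16(x):
        # conversion to 16-bit integers is where clipping finally happens
        return max(-32768, min(32767, int(x * 32767)))

    print([to_int16(s) for s in boosted])   # the overs get clamped to 32767 here
    peak = max(abs(s) for s in boosted)
    print([s / peak for s in boosted])      # ...unless you normalize back under 1.0 first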

> > I talked to Remon about it, and T reflects only gain changes in the
> > waveform view (gain, gain curves, fades). It looks really nice in T.
>
> That sounds reasonable to do.
>
> > I briefly tried to figure out how A does it. It seems it doesn't
> > even reflect that much: no gain curves, no fades, no track gain changes,
> > only clip normalization. I don't know what happened to the crossfades;
> > they used to appear automatically but I couldn't find them anymore, so
> > no idea what happens there.
> >
> > > Perhaps Lauecasound will turn out to be the best environment
> > > for implementing such a feature.
> >
> > Don't remind me that I need to find a name at some point :)
> > I think the simplest form, the way A does it, is enough for most cases.
> > It can become surprisingly complicated, especially when you want fancy
> > stuff like zoom, proper alignment to a timeline and reflection of
> > effects. At some point I want at least a simple, static waveform for
> > orientation purposes.
> >
> > > > > > ....Most [Ecasound] envelopes seem
> > > > > > to be linear, which is fine in some cases but not others; however, that
> > > > > > generic linear envelope that lets you specify any number of points
> > > > > > looks interesting at a quick glance.
> > > > >
> > > > > Yes, that is what Nama uses to provide fades.
> > > > > It's also possible to schedule effect parameter changes
> > > > > directly, which Nama uses for fade-out at transport stop.
> > > >
> > > > Scheduling this stuff is something I wonder about (see my mail to Kai). So how do
> > > > you schedule it, sample-based or using some timer internal to ecasound?
> > >
> > > I use the Linux high-resolution kernel timer and an event
> > > framework that lets me schedule timer events. The timer event
> > > triggers a callback that updates the effect parameter, and
> > > voilà! envelope control without using Ecasound's envelope
> > > functions. I do have some questions about the accuracy of
> > > this approach, for example, whether indeterminate behavior
> > > occurs if another process has the CPU when the timer reaches
> > > the trigger point.
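
If I understand the approach right, it's something like this (a rough sketch,
with threading.Timer standing in for your timer framework and invented
chain/effect ids; pyeca assumed again):

    # Sketch: a timer fires, a callback pushes a new parameter value to ecasound.
    import threading
    from pyeca import *

    e = ECA_CONTROL_INTERFACE()
    # ... chainsetup already built, connected and started elsewhere ...

    def set_gain(value):
        e.command("c-select fade_chain")            # invented chain name
        e.command("cop-set 1,1,%f" % value)         # operator 1, parameter 1 (gain)

    # crude one-second fade-out in ten steps
    for i in range(10):
        threading.Timer(0.1 * i, set_gain, args=[100.0 * (1 - i / 10.0)]).start()

I guess the accuracy question you mention is exactly what would worry me here too.
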
> >
> > I know nothing about timers, but using an external one does sound
> > suboptimal to me. I think I'd want to have the thing sample aligned, but
> > no idea how to do it and it's far away anyway.
>
> For sample-aligned, I think Ecasound's envelopes would be
> the best.

Maybe, I've yet to understand those.
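
For the record, this is how far I've gotten with them. If I read the
documentation right, the generic linear envelope (-klg) attaches breakpoints
to an effect parameter, so a fade stays inside ecasound's own processing.
A sketch, again assuming pyeca and made-up file names:

    # Sketch: a 2-second fade-in done with ecasound's generic linear envelope.
    from pyeca import *

    e = ECA_CONTROL_INTERFACE()
    e.command("cs-add fade_setup")
    e.command("c-add fade_chain")
    e.command("ai-add track1.wav")
    e.command("ao-add jack")
    e.command("cop-add -ea:100")
    # -klg:param,low,high,point_count,pos1,val1,...
    # -> parameter 1 of -ea, scaled 0..100, value 0 at 0 s and 1 (=100 %) at 2 s
    e.command("ctrl-add -klg:1,0,100,2,0,0,2,1")
    e.command("cs-connect")
    e.command("start")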

> > > > > Nice to hear that in your estimation, there is a place for
> > > > > other DAW software than Ardour. :-)
> > > >
> > > > There sure is; A can be surpassed in many areas. Reliability and
> > > > usability for sure, even features to a degree. I'm not alone with that
> > > > estimation, there's at least one guy who switched from A to T for his
> > > > orchestra work. He's working with Remon, T's author, to make T into a
> > > > professional DAW, and in at least performance and usability it does
> > > > surpass A already. It's a relatively special case, but it's a case :)
> > >
> > > Great to hear that. I was wondering when you said that
> > > Traverso is subject to crashing.
> >
> > It's still a work in progress, but apparently getting there. Remon just
> > shocked me when he said he plans to release this summer. I want to have
> > my proof of concept before his next release, so I have little time :)
> > The git version is currently unstable, but with changes to routing and
> > other quite substantial things that's not surprising.
>
> Yay, I'm done with routing forever. :-) Well until my next
> urge to clean up. (Why am I more interested in cleaning up
> code than a physical room?)

I don't know, but you're not alone.

> > Here are two screenshots, showing a new feature called 'childview'
> > (proper name pending), which is simply about showing a subset of tracks.
> > http://traverso-daw.org/screenies/transport/idea19.png
> > http://traverso-daw.org/screenies/transport/idea18.png
>
> Looks well done.
>
> > Another thing being worked on, in parallel to the internal routing, is
> > a track manager to achieve said routing. Here's a preliminary
> > screenshot: http://traverso-daw.org/screenies/routing/trackmanager.png
>
> That's cool, too. Nama has logic to handle signals that
> go to multiple nodes, or that converge on a single node.
>
> However, there is currently no way to arbitrarily connect
> nodes at the user level. The auxiliary send function
> is limited to one send per track.
>
> Although adding features has caused the possibilities to
> mushroom, I try for Nama to always do the right thing,
> and to provide warnings if it can't.
>
> > Also, qwerty control and a sheetview to more easily manage hundreds of
> > tracks are pretty much finished. I don't want to advertise here, just
> > want to say that T is promising.
>
> That's great. QWERTY control is a must for getting work
> done. And nice to know that there is another alternative
> DAW able to handle hundreds of tracks.
>
> Ecasound is okay with that amount of complexity; however,
> with Nama, I had to do some profiling (using the magical
> Devel::NYTProf) to discover where Nama was making too
> many calls.
>
> Another optimization was to evaluate each user input
> to see whether Nama needs to generate a new setup.
> Apart from adding or removing effects, that's how Nama
> responds to change: it generates a new setup from scratch.
>
> To keep the setup file readable for debugging, it
> includes only routing directives. Effects are applied
> after loading the setup, using Ecasound IAM commands.
>
> But I shouldn't be saying too much; it is boasting :-)
> and I don't want to give you any preconceived notions
> that might limit your creative thinking. :-) :-)

This all sounds pretty crazy to me :)
I think I'll try to understand ecasound first and then see what I can
come up with. I spent most of my active time today on IUP packaging, which
is the wrong way around again; I planned to worry about that later..
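
The routing-only setup file plus IAM effects is a nice trick, though. If I
picture it correctly, it's roughly this (a sketch; the .ecs file, the
chainsetup name and the effects are all invented):

    # Sketch: load a routing-only chainsetup, then add effects over ECI.
    from pyeca import *

    e = ECA_CONTROL_INTERFACE()
    e.command("cs-load session.ecs")     # only routing directives, stays readable
    e.command("cs-select session")       # whatever name is stored in the .ecs file
    e.command("c-select vocal_chain")
    e.command("cop-add -ea:120")         # effects bolted on afterwards, per chain
    e.command("cop-add -efl:800")
    e.command("cs-connect")
    e.command("start")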

> > > > > Nama's further development of automation, if it is to
> > > > > happen, will be driven by specific user needs and proposals.
> > > > > At the moment, I don't think I'm likely to conquer new frontiers
> > > > > without some prodding. :-)
> > > >
> > > > From what I gathered from Julien's mails, you respond well to prodding :)
> > >
> > > It's been great to discuss features and implementation
> > > details with him.
> > >
> > > If I can see a reasonable way forward, my curiosity often
> > > leads me to take the next few steps. :-)
> >
> > Something is different between us here: my curiosity usually only leads
> > me to the point where I understand something in principle, no further.
>
> Yes, it is different. I find some concepts so hard to grasp
> that I need to do something practical to make sense of them.
> Also, it's amazing how something conceptually easy can
> be very tough when the rubber actually hits the road.
>
> > > > I wondered about audio feedback one or two times, and I agree that
> > > > would need to be done in a really clever way. The main issues I see are:
> > > > a) input errors
> > >
> > > A three-step process to input, verify and execute might help
> > > with this.
> >
> > Interesting idea. Verify by means of TTS?
>
> I was thinking of prerecorded audio clips.

Ah, OK, I guess this depends on what you mean by 'verify'. I thought
about reading back the typed line, which, with all the possible
combinations, would be a lot of audio data, unless you
cut it into chunks and put them back together as needed.
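
To make the chunk idea a bit more concrete, something like this is what I had
in mind (the clip files, word list and the way of playing them are all made up):

    # Sketch: map command words to prerecorded clips and play them back to back.
    import subprocess

    CLIPS = {"add": "clips/add.wav",
             "track": "clips/track.wav",
             "vocal": "clips/vocal.wav"}

    def speak(command_line):
        for word in command_line.split():
            clip = CLIPS.get(word, "clips/unknown.wav")
            subprocess.call(["ecasound", "-q", "-i", clip, "-o", "jack"])

    speak("add track vocal")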

> > > > b) keeping control responses and production audio apart
> > >
> > > Can you explain what you mean in a bit more detail?
> >
> > Assuming you get both 'feedback' and 'production' audio the same way,
> > possibly at the same time, the two might clash. We have only two ears,
> > and usually use them together (maybe there's a hint here?). I
> > can imagine that you might have a hard time hearing the 'feedback' while
> > the music is playing, or vice versa. It might even be hard to say what
> > is more important at any given time.
> > The problem comes down to using the ears for two things at the same time.
> > It can probably be done cleverly; there are a few things I can imagine
> > to work around the problem, but I don't see an obvious best solution.
>
> Another case where some practical experiments would help.
>
> Best,
>
> Joel

Definitely. Maybe Julien already has some experience with audio feedback
in general. There are lots of things to try in practice: left/right,
ducking (DJ-style), using a certain frequency band, feedback only while
the transport isn't rolling, whatever else might come to mind..

-- 
Regards,
Philipp
--
"Wir stehen selbst enttäuscht und sehn betroffen / Den Vorhang zu und alle Fragen offen." Bertolt Brecht, Der gute Mensch von Sezuan