i know there are programs out there for midi loopback...i.e. having virtual midi devices so different midi software applications on the same computer can communicate with one another. is there software like this for audio data?
for example, here is one thing i am interested in doing:
--generate audio in Pd, but instead of sending it to a physical hardware device (soundcard), send it to some virtual device
--using an audio editing program, use this virtual device's output as the recording input
--monitor the recording by sending the recorded signal to the soundcard
make sense? any responses would be greatly appreciated, and soon: i'm supposed to come up with a solution to this problem shortly, and i don't think i have time to write my own program to get the job done.
oh yeah, i'm using windows xp
zac hilbert~
-
Loopback devices, virtual audio devices?
-
for win32 i have heard of software named "Virtual Cable"; maybe that could be a solution. if you find a better (and free) one, let me know.
-
i'm looking for a free solution too. i don't think my original idea is going to work out because i don't have the time to implement it. maybe some of you have ideas for the problem i am trying to solve.
the problem is this:
i am helping a professor with some research. he is doing a case study of 3 composers, asking them to record a narrative of their thoughts on the composition process as they compose. for this, the composers will be working on a mac studio workstation, putting the composition together in logic. a second computer, a pc running audacity, will be used to record their narration. when a composer reaches what they consider a significant change in the composition, we are asking them to save their project to a new file (so we end up with a series of files showing the various stages of the composition). we would like a way, however, to map the timestamps of those files to the 'timeline' of their narrative.
here are a few solutions that are not exactly desirable:
a. do not stop the recording at all and make a note of what time the recording started. you can then calculate when any piece of speech took place by adding the elapsed hours, minutes, and seconds to the recording's start time. the problem is that this yields very large files, which are not very practical, especially considering that we have to transcribe them.
b. have the composers start each segment of narration with a spoken timestamp: "it is now 9:15 on tuesday...." the problem is that, as part of the research methodology, this interferes with the flow of a more natural narrative of the compositional process.
c. have the composers save each segment of narration as a separate time-stamped file. the problem here is that this takes more time, and could create a lot of files that would be very annoying to work with when it comes to transcribing.
d. my idea was to have, instead of just input from the microphone, 2 streams of audio input, one on the left channel and one on the right. on the left would be the recorded narrative. on the right would be an audio signal encoding a time stamp. i was thinking of simply converting a number such as DDMMHHMM (day, month, hour, minute) into DTMF tones, which could later be translated back into a timestamp. an 8-tone dtmf sequence would be generated every 10 seconds or so; this way, as long as a narrative segment was longer than 10 seconds, it would contain a timestamp. the problem is that i have no way to mix such a signal with the input from the microphone.
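for what it's worth, the encoding side of the dtmf timestamp idea above is straightforward to prototype. here's a minimal python sketch (the function names, 8 kHz rate, and tone/gap lengths are my own choices for illustration, not anything from this thread):

```python
import math
from datetime import datetime

# standard DTMF row/column frequency pairs for the digits 0-9 (Hz)
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "0": (941, 1336),
}

def timestamp_digits(now):
    """DDMMHHMM, as described in the post."""
    return now.strftime("%d%m%H%M")

def dtmf_samples(digits, rate=8000, tone_ms=80, gap_ms=40):
    """return float samples in [-1, 1] encoding the digit string as DTMF."""
    out = []
    for d in digits:
        lo, hi = DTMF[d]
        for n in range(rate * tone_ms // 1000):
            t = n / rate
            # sum the two sinusoids, scaled so the peak stays within +/-1
            out.append(0.5 * math.sin(2 * math.pi * lo * t)
                       + 0.5 * math.sin(2 * math.pi * hi * t))
        out.extend([0.0] * (rate * gap_ms // 1000))  # silence between digits
    return out
```

the resulting samples could be written to the right channel every 10 seconds while the microphone feeds the left; decoding is just detecting which frequency pair is present in each tone burst.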
any suggestions would be greatly appreciated. thanks. -
Your second "c" could be the answer, if you used Pd to record instead of Audacity.
Pd has [adc~] and [writesf~], and I made a patch that splits [writesf~] output into multiple files as it is writing them: http://puredata.antibling.org/index.php?action=vthread&forum=5&topic=101
The [flite] external might make your audio time stamps more human-friendly: http://www.ling.uni-potsdam.de/~moocow/projects/pd/ -
...what you could do is just make a patch in pd to tell the time.
would be super easy.
[metro 1000] sends a bang every second
use [select 60] to bang a minute counter, and then return the second counter to 0
...the minute counter bangs an hour counter every 60 minutes....the hour counter bangs a day counter.
pack the days, hours, and minutes into one message, and then use that message as the filename for your audio recordings.
this solution also assumes that you use pd to record, instead of audacity.
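the counter cascade described above maps directly onto ordinary code. here's a hedged python sketch of the same logic (the class and the DD-HH-MM filename format are my own illustration; in pd this would be [metro], [select], and [pack] objects, not text):

```python
class ClockCounter:
    """mimics the [metro 1000] / [select 60] cascade: one bang per second."""

    def __init__(self, day=0, hour=0, minute=0):
        self.day, self.hour, self.minute, self.second = day, hour, minute, 0

    def bang(self):
        """called once per second, like [metro 1000]."""
        self.second += 1
        if self.second == 60:      # like [select 60]: reset and carry upward
            self.second = 0
            self.minute += 1
            if self.minute == 60:  # minute counter bangs the hour counter
                self.minute = 0
                self.hour += 1
                if self.hour == 24:  # hour counter bangs the day counter
                    self.hour = 0
                    self.day += 1

    def filename(self):
        """pack days, hours, minutes into one name for the recording."""
        return f"{self.day:02d}-{self.hour:02d}-{self.minute:02d}.wav"
```

each time a new recording starts, `filename()` gives a name that doubles as a timestamp.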
and notes on writesf~ :
it will only render a wave file....make sure you add the .wav extension to your files so other progs can read them
make sure you close the recording at the end of the file....otherwise you get a wav file of infinite length, which most sound editors won't read. -
thank you for your suggestions. for future experiments/research we may implement such a scheme; for now, though, we have resorted to a less technological solution: we are just having the participants make a verbal note in the recording whenever they save a new midi file. the participants in the study are not familiar with Pd (it's a shame, i know), so we don't want them running into problems trying to use a Pd patch. thanks again, though; your solutions are much appreciated