I'm caught between a rock and a hard place. My project lets the user speak into a microphone and records the sound while simultaneously displaying graphics; normally, the graphics animate in sync with the voice as it is being recorded.

My question is: how can I substitute a sound buffer for the microphone, so that the sound is recorded from a byte array instead of from the microphone? In addition to live microphone input, I now need to let the user supply his own pre-recorded sound. Either the sound would be played back as it is read in, with the graphics displayed at the same time, or it would be read into a buffer first and then recorded from that buffer while the graphics are displayed, whichever is the best or easiest way to go.

I am using the waveIn and waveOut API calls and would like to keep it that way, since everything else in the project uses these APIs.