public static (IEnumerable<MMDevice> devices, MMDevice defaultDevice) GetDevices()
{
    var defaultDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
    var devices = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active);
    return (devices, defaultDevice);
}
public void GetDevices(IMainForm mainFormIn)
{
    mainForm = mainFormIn;
    var defaultDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
    var devices = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active);
    mainForm.AddRecordingDevices(devices, defaultDevice);
}
public static void ChangeActiveDevice(string deviceId)
{
    var device = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render)
        .SingleOrDefault(x => x.DeviceID == deviceId);
    if (device != null)
    {
        ActiveDevice.Start(device);
    }
}
static void Main(string[] args)
{
    MMDevice dev = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);

    capture = new WasapiLoopbackCapture();
    capture.Device = dev;
    capture.Initialize();

    SoundInSource soundInSource = new SoundInSource(capture);
    nStream = new SingleBlockNotificationStream(soundInSource.ToSampleSource());
    final = nStream.ToWaveSource();
    nStream.SingleBlockRead += NStream_SingleBlockRead;
    soundInSource.DataAvailable += encode;
    trashBuf = new byte[final.WaveFormat.BytesPerSecond / 2];

    Console.WriteLine($"sample rate:{capture.WaveFormat.SampleRate}");
    Console.WriteLine($"bits per sample:{capture.WaveFormat.BitsPerSample}");
    Console.WriteLine($"channels:{capture.WaveFormat.Channels}");
    Console.WriteLine($"bytes per sample:{capture.WaveFormat.BytesPerSample}");
    Console.WriteLine($"bytes per second:{capture.WaveFormat.BytesPerSecond}");
    Console.WriteLine($"AudioEncoding:{capture.WaveFormat.WaveFormatTag}");

    EncodingContext context = FrameEncoder.GetDefaultsContext();
    context.Channels = 6;
    context.SampleRate = capture.WaveFormat.SampleRate;
    context.AudioCodingMode = AudioCodingMode.Front3Rear2;
    context.HasLfe = true;
    context.SampleFormat = A52SampleFormat.Float;
    enc = new FrameEncoderFloat(ref context);

    //_writer = new WaveWriter("test.ac3", final.WaveFormat);

    capture.Start();

    wBuffSrc = new WriteableBufferingSource(
        new WaveFormat(capture.WaveFormat.SampleRate, capture.WaveFormat.BitsPerSample,
            capture.WaveFormat.Channels, AudioEncoding.WAVE_FORMAT_DOLBY_AC3_SPDIF),
        (int)capture.WaveFormat.MillisecondsToBytes(20));

    w = new WasapiOut2(false, AudioClientShareMode.Shared, 20);
    w.Device = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active)
        .Single(x => x.FriendlyName.Contains("Digital"));

    AudioClient a = AudioClient.FromMMDevice(w.Device);
    w.Initialize(wBuffSrc);
    w.Play();

    Task.Run(async () => await encoderThread());
    //encodeSinus();

    Console.ReadLine();
    System.Environment.Exit(0);
}
public AudioDevicePool()
{
    var devices = MMDeviceEnumerator.EnumerateDevices(DataFlow.All, DeviceState.Active);
    for (var i = 0; i < devices.GetCount(); i++)
    {
        mDeviceList.Add(devices.ItemAt(i));
    }
}
public MainWindow()
{
    InitializeComponent();
    mmdevicesOut = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active).ToList();
    mmdevicesIn = MMDeviceEnumerator.EnumerateDevices(DataFlow.Capture, DeviceState.Active).ToList();
    comboBox.ItemsSource = mmdevicesOut.Concat(mmdevicesIn);
    comboBox_Copy.ItemsSource = mmdevicesOut.Concat(mmdevicesIn);
}
public void LogDevices()
{
    var devices = MMDeviceEnumerator.EnumerateDevices(DataFlow.All);
    for (var i = 0; i < devices.Count; i++)
    {
        var device = devices[i];
        Debug.Log($"Device {device.FriendlyName} ({device.DeviceID}) - {device.DataFlow}");
    }
}
public List<string> GetEndpointNames()
{
    List<string> endpointNames = new List<string>();
    foreach (MMDevice endpoint in MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active))
    {
        endpointNames.Add(endpoint.FriendlyName);
    }
    return endpointNames;
}
private void StartupController_StartupCompleted(object sender, EventArgs e)
{
    if (string.IsNullOrWhiteSpace(Preferences.LoopbackDeviceID))
    {
        ErrorController.Instance.AddErrorMessage();
    }
    else
    {
        MMDeviceCollection deviceCollection = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active);
        MMDevice device = deviceCollection.FirstOrDefault(mmDevice => mmDevice.DeviceID == Preferences.LoopbackDeviceID);
        this.Init(device);
    }
}
private void SetupAudioSelection()
{
    var properties = (AudioPropertiesModel)LayerModel.Properties;

    Devices.Clear();
    Devices.Add("Default");

    // Select the proper devices and make sure they are unique
    var dataFlow = properties.DeviceType == MmDeviceType.Input ? DataFlow.Capture : DataFlow.Render;
    Devices.AddRange(MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active)
        .Select(d => d.FriendlyName)
        .Distinct());
}
/// <summary>
/// Get the recording devices.
/// </summary>
public void GetDevices()
{
    if (mainForm == null)
    {
        return;
    }

    var devices = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active);
    if (devices.Count > 0)
    {
        var defaultDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
        mainForm.AddRecordingDevices(devices, defaultDevice);
    }
}
public static void RecordTo(string fileName, TimeSpan time, WaveFormat format)
{
    CaptureMode captureMode = CaptureMode.Capture;
    DataFlow dataFlow = captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;

    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    var device = devices.FirstOrDefault();

    using (WasapiCapture soundIn = captureMode == CaptureMode.Capture
        ? new WasapiCapture()
        : new WasapiLoopbackCapture())
    {
        soundIn.Device = device;
        soundIn.Initialize();

        SoundInSource soundInSource = new SoundInSource(soundIn) { FillWithZeros = false };
        IWaveSource convertedSource = soundInSource
            .ChangeSampleRate(format.SampleRate)    // sample rate
            .ToSampleSource()
            .ToWaveSource(format.BitsPerSample);    // bits per sample

        using (convertedSource = format.Channels == 1 ? convertedSource.ToMono() : convertedSource.ToStereo())
        using (WaveWriter waveWriter = new WaveWriter(fileName, convertedSource.WaveFormat))
        {
            soundInSource.DataAvailable += (s, e) =>
            {
                byte[] buffer = new byte[convertedSource.WaveFormat.BytesPerSecond / 2];
                int read;
                while ((read = convertedSource.Read(buffer, 0, buffer.Length)) > 0)
                {
                    waveWriter.Write(buffer, 0, read);
                }
            };

            soundIn.Start();
            Console.WriteLine("Started recording");
            Thread.Sleep(time);
            soundIn.Stop();
            Console.WriteLine("Finished recording");
        }
    }
}
public IEnumerable<AudioDeviceItem> GetDevices(DataFlow flow)
{
    var devices = MMDeviceEnumerator.EnumerateDevices(flow, DeviceState.Active);
    int index = 0;
    foreach (var device in devices)
    {
        yield return new AudioDeviceItem { Id = index, Name = device.FriendlyName };
        index++;
    }
}
/// <summary>
/// First item is device id, second item is friendly name.
/// </summary>
/// <returns></returns>
public static IEnumerable<(string, string)> GetDevices()
{
    using var enumerator = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render);
    foreach (var device in enumerator)
    {
        string deviceId, friendlyName;
        try
        {
            deviceId = device.DeviceID;
            friendlyName = device.FriendlyName;
        }
        catch (CSCore.Win32.Win32ComException)
        {
            continue;
        }
        yield return (deviceId, friendlyName);
    }
}
private MMDevice GetMmDevice()
{
    if (_properties == null)
    {
        return null;
    }

    if (_properties.DeviceType == MmDeviceType.Input)
    {
        return _properties.Device == "Default"
            ? MMDeviceEnumerator.TryGetDefaultAudioEndpoint(DataFlow.Capture, Role.Multimedia)
            : MMDeviceEnumerator.EnumerateDevices(DataFlow.Capture)
                .FirstOrDefault(d => d.FriendlyName == _properties.Device);
    }

    return _properties.Device == "Default"
        ? MMDeviceEnumerator.TryGetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia)
        : MMDeviceEnumerator.EnumerateDevices(DataFlow.Render)
            .FirstOrDefault(d => d.FriendlyName == _properties.Device);
}
public void InitializeRecording()
{
    DataFlow dataFlow = _captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;
    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    if (!devices.Any())
    {
        _logger.Error("No devices found.");
        return;
    }

    _logger.Info("Select device:");
    for (int i = 0; i < devices.Count; i++)
    {
        _logger.Info("- {0:#00}: {1}", i, devices[i].FriendlyName);
    }

    int selectedDeviceIndex = 0;
    _device = devices[selectedDeviceIndex];
}
private void ReloadDevices()
{
    cbInputDev.Items.Clear();
    cbOutputDev.Items.Clear();

    foreach (var device in MMDeviceEnumerator.EnumerateDevices(DataFlow.All))
    {
        if (device.DeviceState != DeviceState.Active)
        {
            continue;
        }

        if (device.DataFlow == DataFlow.Capture && !cbInputDev.Items.Contains(device))
        {
            cbInputDev.Items.Add(device);
        }
        else if (!cbOutputDev.Items.Contains(device))
        {
            cbOutputDev.Items.Add(device);
        }
    }
}
private void PopulateDropdownLoopbackDevice()
{
    MMDeviceCollection deviceCollection = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active);
    this.devices = deviceCollection.ToList();
    this.devices.Insert(0, null);

    this.dropdownLoopbackDevice.options = new List<TMP_Dropdown.OptionData>(this.devices.Count);
    foreach (var device in this.devices)
    {
        this.dropdownLoopbackDevice.options.Add(new TMP_Dropdown.OptionData(device?.FriendlyName ?? "NONE"));
    }

    // Select NONE or the preferred device
    int selectedIndex = 0;
    if (!string.IsNullOrWhiteSpace(Preferences.LoopbackDeviceID))
    {
        int preferredDeviceIndex = this.devices.FindIndex(device => device?.DeviceID == Preferences.LoopbackDeviceID);
        selectedIndex = preferredDeviceIndex > 0 ? preferredDeviceIndex : 0;
    }
    this.dropdownLoopbackDevice.value = selectedIndex;
    this.dropdownLoopbackDevice.captionText.text = this.dropdownLoopbackDevice.options[selectedIndex].text;
}
public void Record(string waveFileName, string mp3FileName, bool includeMp3)
{
    int latency = 5;
    int sampleRate = 44100; // the original value, 320000, looks like an mp3 bitrate, not a sample rate
    int bits = 32;
    int channels = 2;
    //var encoding = AudioEncoding.MpegLayer3;
    WaveFormat waveFormat = new WaveFormat(sampleRate, bits, channels);

    using (WasapiCapture capture = new WasapiLoopbackCapture(latency, waveFormat, ThreadPriority.Highest))
    {
        Dictionary<int, MMDevice> devices = new Dictionary<int, MMDevice>();
        int i = 1;
        foreach (MMDevice device in MMDeviceEnumerator.EnumerateDevices(DataFlow.Render))
        {
            devices.Add(i, device);
            i++;
        }

        ColorConsole.WriteLine("Available devices:", ConsoleColor.Blue);
        foreach (var x in devices)
        {
            if (x.Value.FriendlyName == capture.Device.FriendlyName)
            {
                ColorConsole.WriteLine(x.Key + ". " + x.Value.FriendlyName + " [default]", ConsoleColor.Cyan);
            }
            else
            {
                ColorConsole.WriteLine(x.Key + ". " + x.Value.FriendlyName, ConsoleColor.DarkMagenta);
            }
        }

        bool optionSelected = false;
        while (!optionSelected)
        {
            ColorConsole.Write("Select which device above to record from 1,2,3... (Enter for default - ", ConsoleColor.White, capture.Device.FriendlyName);
            ColorConsole.Write("{0}", ConsoleColor.Cyan, capture.Device.FriendlyName);
            ColorConsole.Write(") $", ConsoleColor.White, capture.Device.FriendlyName);

            ConsoleKeyInfo key = Console.ReadKey();
            string keystring = key.KeyChar.ToString();
            if (key.Key == ConsoleKey.Enter)
            {
                optionSelected = true;
            }
            else if (int.TryParse(keystring, out var result))
            {
                capture.Device = devices[result];
                optionSelected = true;
            }
        }

        Console.WriteLine();
        ColorConsole.WriteLine("Recording initialising", ConsoleColor.Blue);
        capture.Initialize();

        using (WaveRecorder waveRecorder = new WaveRecorder(waveFileName, capture))
        {
            ColorConsole.Write("Press ENTER to start recording $", ConsoleColor.White);
            WaitForEnter();
            ColorConsole.WriteLine("Writing wave to file '{0}'", ConsoleColor.Blue, waveFileName);
            waveRecorder.StartRecording();
            ColorConsole.Write("Recording...\nPress ENTER to end recording $", ConsoleColor.White);
            WaitForEnter();
            waveRecorder.EndRecording();
        }

        ColorConsole.WriteLine("Finished recording", ConsoleColor.Blue, Path.GetFullPath(waveFileName));
        ColorConsole.WriteLine("{0} written to disk.", ConsoleColor.Blue, Path.GetFullPath(waveFileName));

        if (includeMp3)
        {
            ToMp3 toMp3 = new ToMp3();
            ColorConsole.WriteLine("Creating mp3 from wav...", ConsoleColor.Blue, Path.GetFullPath(mp3FileName));
            toMp3.ConvertFromWave(waveFileName, mp3FileName);
            ColorConsole.WriteLine("Finished creating mp3", ConsoleColor.Blue, Path.GetFullPath(mp3FileName));
            ColorConsole.WriteLine("{0} written to disk.", ConsoleColor.Blue, Path.GetFullPath(mp3FileName));
            ColorConsole.Write("Press ENTER to exit application $", ConsoleColor.White, Path.GetFullPath(mp3FileName));
            WaitForEnter();
        }
    }
}
static void writeSpeakersToWav(string[] args)
{
    const int GOOGLE_RATE = 16000;
    const int GOOGLE_BITS_PER_SAMPLE = 16;
    const int GOOGLE_CHANNELS = 1;
    const int EARPHONES = 5;

    CaptureMode captureMode = CaptureMode.LoopbackCapture;
    DataFlow dataFlow = captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;

    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    for (int i = 0; i < devices.Count; i++)
    {
        Console.WriteLine("- {0:#00}: {1}", i, devices[i].FriendlyName);
    }

    var headphones = devices.First(x => x.FriendlyName.StartsWith("small"));

    //using (WasapiCapture capture = new WasapiLoopbackCapture())
    using (WasapiCapture soundIn = captureMode == CaptureMode.Capture ? new WasapiCapture() : new WasapiLoopbackCapture())
    {
        //if necessary, you can choose a device here:
        //simply set the Device property of the capture to any MMDevice
        //to choose a device, take a look at the sample here: http://cscore.codeplex.com/
        soundIn.Device = headphones;

        Console.WriteLine("Waiting, press any key to start");
        Console.ReadKey();

        //initialize the selected device for recording
        soundIn.Initialize();

        //create a SoundInSource around the soundIn instance;
        //it will provide the data captured by the soundIn instance
        SoundInSource soundInSource = new SoundInSource(soundIn) { FillWithZeros = false };

        //create a source that converts the data provided by the
        //soundInSource to the target format, using the "Fluent" extension methods
        IWaveSource convertedSource = soundInSource
            .ChangeSampleRate(GOOGLE_RATE)          // sample rate
            .ToSampleSource()
            .ToWaveSource(GOOGLE_BITS_PER_SAMPLE);  // bits per sample

        var channels = GOOGLE_CHANNELS;
        //channels...
        using (convertedSource = channels == 1 ? convertedSource.ToMono() : convertedSource.ToStereo())
        //create a WaveWriter to write the data to disk
        using (WaveWriter w = new WaveWriter("dump.wav", convertedSource.WaveFormat))
        {
            //register an event handler for the DataAvailable event of the soundInSource
            //Important: use the DataAvailable of the SoundInSource;
            //if you use the DataAvailable event of the ISoundIn itself,
            //the recorded data might not be available at the soundInSource yet
            soundInSource.DataAvailable += (s, e) =>
            {
                //read data from the convertedSource
                //important: don't use e.Data here -
                //it contains the raw data provided by the soundInSource,
                //which won't have your target format
                byte[] buffer = new byte[convertedSource.WaveFormat.BytesPerSecond / 2];
                int read;
                //keep reading as long as we still get some data;
                //when using such a loop, make sure soundInSource.FillWithZeros is set to false
                while ((read = convertedSource.Read(buffer, 0, buffer.Length)) > 0)
                {
                    //write the read data to the file
                    // ReSharper disable once AccessToDisposedClosure
                    w.Write(buffer, 0, read);
                }
            };

            //start recording
            soundIn.Start();
            Console.WriteLine("Started, press any key to stop");
            Console.ReadKey();
            //stop recording
            soundIn.Stop();
        }
    }
}
public void Start(TimeSpan time)
{
    int sampleRate = 48000;
    int bitsPerSample = 24;

    MMDeviceCollection devices;
    while (!(devices = MMDeviceEnumerator.EnumerateDevices(DataFlow.Capture, DeviceState.Active)).Any())
    {
        Thread.Sleep(2000);
    }
    var device = devices.FirstOrDefault();

    //TODO: We have a memory leak here (soundIn should be cleared from time to time). Needs to be fixed!
    //create a new soundIn instance
    using (WasapiCapture soundIn = new WasapiCapture())
    {
        soundIn.Device = device;
        //initialize the soundIn instance
        soundIn.Initialize();

        //create a SoundInSource around the soundIn instance
        SoundInSource soundInSource = new SoundInSource(soundIn) { FillWithZeros = false };

        //create a source that converts the data provided by the soundInSource to the target format
        IWaveSource convertedSource = soundInSource
            .ChangeSampleRate(sampleRate)   // sample rate
            .ToSampleSource()
            .ToWaveSource(bitsPerSample);   // bits per sample

        using (var stream = new MemoryStream())
        {
            var readBufferLength = convertedSource.WaveFormat.BytesPerSecond / 2;

            //channels...
            using (convertedSource = convertedSource.ToStereo())
            {
                //create a new wave file
                using (WaveWriter waveWriter = new WaveWriter(stream, convertedSource.WaveFormat))
                {
                    //register an event handler for the DataAvailable event of the soundInSource
                    soundInSource.DataAvailable += (s, e) =>
                    {
                        //read data from the convertedSource
                        byte[] buffer = new byte[readBufferLength];
                        int read;
                        //keep reading as long as we still get some data
                        while ((read = convertedSource.Read(buffer, 0, buffer.Length)) > 0)
                        {
                            var decibelsCalibrated = (int)Math.Round(GetSoundLevel(buffer, _calibrateAdd, _calibratescale, _calibrateRange));
                            if (decibelsCalibrated < 0)
                            {
                                decibelsCalibrated = 0;
                            }
                            OnNoiseData?.Invoke(null, new NoiseInfoEventArgs() { Decibels = decibelsCalibrated });

                            //write the read data to the stream
                            waveWriter.Write(buffer, 0, read);
                        }
                    };

                    soundIn.Stopped += (e, args) =>
                    {
                        OnStopped?.Invoke(null, null);
                        lock (_stopLocker)
                            Monitor.PulseAll(_stopLocker);
                    };

                    var tm = new Timer(state => soundIn?.Stop(), null, time, time);

                    //start recording
                    soundIn.Start();
                    OnStarted?.Invoke(null, null);

                    Monitor.Enter(_stopLocker);
                    {
                        Monitor.PulseAll(_stopLocker);
                        Monitor.Wait(_stopLocker);
                    }

                    //stop recording
                    soundIn.Stop();
                }
            }
        }
    }
}
public override void Start()
{
    if (_started)
    {
        Stop();
    }

    DataFlow dataFlow = (DataFlow)_speechSettings.SelectedDataFlowId;
    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    if (devices.Count - 1 < _speechSettings.InputDeviceIndex)
    {
        throw new Exception($"Device index {_speechSettings.InputDeviceIndex} is not available");
    }

    if (dataFlow == DataFlow.Render)
    {
        var wasapiFormat = _waveFormatAdapter.WaveFormatFromCurrentSettings();
        _soundIn = new WasapiLoopbackCapture(100, wasapiFormat);
    }
    else
    {
        _soundIn = new WasapiCapture();
    }

    _soundIn.Device = devices[_speechSettings.InputDeviceIndex];
    _soundIn.Initialize();

    var wasapiCaptureSource = new SoundInSource(_soundIn) { FillWithZeros = false };
    _waveSource = wasapiCaptureSource
        .ChangeSampleRate(_speechSettings.SampleRateValue)      // sample rate
        .ToSampleSource()
        .ToWaveSource(_speechSettings.BitsPerSampleValue);      // bits per sample

    if (_speechSettings.ChannelValue == 1)
    {
        _waveSource = _waveSource.ToMono();
    }
    else
    {
        _waveSource = _waveSource.ToStereo();
    }

    wasapiCaptureSource.DataAvailable += (s, e) =>
    {
        //read data from the converted source
        //important: don't use e.Data here -
        //it contains the raw data provided by the soundInSource,
        //which won't have your target format
        byte[] buffer = new byte[_waveSource.WaveFormat.BytesPerSecond / 2];
        int read;
        //keep reading as long as we still get some data;
        //when using such a loop, make sure FillWithZeros is set to false
        while ((read = _waveSource.Read(buffer, 0, buffer.Length)) > 0)
        {
            SendData(buffer, read);
        }
    };

    _soundIn.Start();
    _started = true;
}
public void Record(string deviceName, string audioFilePath = @"C:\Temp\output.wav")
{
    _timer = new Stopwatch();
    _timer.Start();

    // choose the capture mode
    CaptureMode captureMode = CaptureMode.LoopbackCapture;
    DataFlow dataFlow = captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;

    //select the device:
    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    if (!devices.Any())
    {
        Console.WriteLine("### No devices found.");
        return;
    }

    Console.WriteLine($"### Using device {deviceName}");
    var device = devices.First(d => d.FriendlyName.Equals(deviceName));

    //start recording:
    //create a new soundIn instance
    _soundIn = captureMode == CaptureMode.Capture ? new WasapiCapture() : new WasapiLoopbackCapture();
    //optional: set some properties
    _soundIn.Device = device;
    //initialize the soundIn instance
    _soundIn.Initialize();

    //create a SoundInSource around the soundIn instance;
    //it will provide the data captured by the soundIn instance
    SoundInSource soundInSource = new SoundInSource(_soundIn) { FillWithZeros = false };

    //create a source that converts the data provided by the
    //soundInSource to the target format, using the "Fluent" extension methods
    _convertedSource = soundInSource
        .ChangeSampleRate(SampleRate)   // sample rate
        .ToSampleSource()
        .ToWaveSource(BitsPerSample);   // bits per sample

    //channels...
    _convertedSource = _convertedSource.ToMono();

    //create a new wave file
    _waveWriter = new WaveWriter(audioFilePath, _convertedSource.WaveFormat);

    //register an event handler for the DataAvailable event of the soundInSource
    //Important: use the DataAvailable of the SoundInSource;
    //if you use the DataAvailable event of the ISoundIn itself,
    //the recorded data might not be available at the soundInSource yet
    soundInSource.DataAvailable += (s, e) =>
    {
        //read data from the convertedSource
        //important: don't use e.Data here -
        //it contains the raw data provided by the soundInSource,
        //which won't have your target format
        byte[] buffer = new byte[_convertedSource.WaveFormat.BytesPerSecond / 2];
        int read;
        //keep reading as long as we still get some data;
        //when using such a loop, make sure soundInSource.FillWithZeros is set to false
        while ((read = _convertedSource.Read(buffer, 0, buffer.Length)) > 0)
        {
            //write the read data to the file
            // ReSharper disable once AccessToDisposedClosure
            _waveWriter.Write(buffer, 0, read);
        }
    };

    //we've set everything we need -> start capturing data
    _soundIn.Start();
    Console.WriteLine($"### RECORDING {audioFilePath}");

    while (_timer.ElapsedMilliseconds / 1000 < 15 && _timer.IsRunning)
    {
        Thread.Sleep(500);
    }

    Console.WriteLine("### STOP RECORDING");
    _soundIn.Stop();
    _timer.Stop();

    _waveWriter.Dispose();
    _convertedSource.Dispose();
    _soundIn.Dispose();

    AudioFileCaptured?.Invoke(this, new AudioRecorderEventArgs() { AudioFilePath = audioFilePath });
}
private WasapiCapture StartListeningOnLoopback()
{
    const int GOOGLE_RATE = 16000;
    const int GOOGLE_BITS_PER_SAMPLE = 16;
    const int GOOGLE_CHANNELS = 1;

    CaptureMode captureMode = _captureMode;
    DataFlow dataFlow = captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;

    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    Console.WriteLine("Please select device:");
    for (int i = 0; i < devices.Count; i++)
    {
        Console.WriteLine(i + ") " + devices[i].FriendlyName);
    }
    var deviceIndex = int.Parse(Console.ReadLine());
    var headphones = devices[deviceIndex];

    //using (WasapiCapture capture = new WasapiLoopbackCapture())
    _soundIn = captureMode == CaptureMode.Capture ? new WasapiCapture() : new WasapiLoopbackCapture();

    //if necessary, you can choose a device here:
    //simply set the Device property of the capture to any MMDevice
    //to choose a device, take a look at the sample here: http://cscore.codeplex.com/
    _soundIn.Device = headphones;

    //initialize the selected device for recording
    _soundIn.Initialize();

    //create a SoundInSource around the soundIn instance;
    //it will provide the data captured by the soundIn instance
    _soundInSource = new SoundInSource(_soundIn) { FillWithZeros = false };

    //create a source that converts the data provided by the
    //soundInSource to the target format, using the "Fluent" extension methods
    _convertedSource = _soundInSource
        .ChangeSampleRate(GOOGLE_RATE)          // sample rate
        .ToSampleSource()
        .ToWaveSource(GOOGLE_BITS_PER_SAMPLE);  // bits per sample

    var channels = GOOGLE_CHANNELS;
    //channels...
    var src = channels == 1 ? _convertedSource.ToMono() : _convertedSource.ToStereo();

    _soundInSource.DataAvailable += (sender, args) =>
    {
        //read data from the converted source
        //important: don't use e.Data here -
        //it contains the raw data provided by the soundInSource,
        //which won't have your target format
        byte[] buffer = new byte[_convertedSource.WaveFormat.BytesPerSecond / 2];
        int read;
        //keep reading as long as we still get some data;
        //when using such a loop, make sure soundInSource.FillWithZeros is set to false
        while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
        {
            // ReSharper disable once AccessToDisposedClosure
            Debug.WriteLine($"Read {read} bytes");
            _microphoneBuffer.Add(ByteString.CopyFrom(buffer, 0, read));
            //w.Write(buffer, 0, read);
        }
    };

    return _soundIn;
}
//public PlaybackStoppedDele PlaybackStopped { get; set; }
//public PlaybackContiuneDele PlaybackContiune { set; get; }

#endregion Properties

#region Methods

private void InitializePlayback(int Volume = 50, string openMethods = "waveout", string device = "扬声器") // "扬声器" = "Speakers"
{
    MMDevice mMDevice;
    device = device.Trim();
    openMethods = openMethods.Trim();

    if (openMethods.IndexOf("WaveOut") != -1)
    {
        IEnumerable<WaveOutDevice> dives = WaveOutDevice.EnumerateDevices();
        IEnumerable<WaveOutDevice> divselect = dives.Where(x => x.Name.IndexOf(device) != -1);
        WaveOutDevice div = null;
        if (divselect.Count() == 0)
        {
            div = dives.FirstOrDefault();
        }
        else if (divselect.Count() == 1)
        {
            div = divselect.FirstOrDefault();
        }
        else
        {
            Debug.Print("***** unexpected input"); // more than one device matched
            div = divselect.FirstOrDefault();
        }
        if (div == null)
        {
            throw new NotSupportedException("no waveout device exists");
        }
        _soundOut = new WaveOut() { Device = div, Latency = 100 }; // a 300 ms latency caused an arithmetic overflow, probably triggered by another exception
    }
    else if (openMethods.IndexOf("WasApiOut") != -1)
    {
        var enumerator = new MMDeviceEnumerator();
        IEnumerable<MMDevice> mMDevices = MMDeviceEnumerator.EnumerateDevices(DataFlow.Render).Where(x => x.DeviceState == DeviceState.Active);
        IEnumerable<MMDevice> dives = enumerator.EnumAudioEndpoints(DataFlow.All, DeviceState.All).Where(x => x.DeviceState == DeviceState.Active);
        mMDevices = mMDevices.Join(dives, x => x.FriendlyName, x => x.FriendlyName, (x, y) => x).ToArray();
        mMDevice = mMDevices.Where(x => x.FriendlyName.IndexOf(device) != -1).FirstOrDefault(x => x.DeviceState == DeviceState.Active);
        _soundOut = new WasapiOut() { Device = mMDevice, Latency = 200 };
    }
    else
    {
        IEnumerable<DirectSoundDevice> dives = DirectSoundDeviceEnumerator.EnumerateDevices();
        var divselect = dives.Where(x => x.Description.IndexOf(device) != -1);
        DirectSoundDevice div = null;
        if (divselect.Count() == 0)
        {
            div = dives.FirstOrDefault();
        }
        else if (divselect.Count() == 1)
        {
            div = divselect.FirstOrDefault();
        }
        else
        {
            //Debug.Print("***** unexpected input *****");
            div = divselect.FirstOrDefault();
        }
        if (div == null)
        {
            throw new NotSupportedException("no directsound device exists");
        }
        _soundOut = new DirectSoundOut() { Device = div.Guid, Latency = 100 };
    }

    if (_filePath.LastIndexOf(".mp3") != -1) // read the stream asynchronously; this API has a deadlock bug when a flac stream is repositioned frequently during async reads
    {
        Stream fs = File.OpenRead(_filePath);
        _waveSource = new CSCore.Codecs.MP3.Mp3MediafoundationDecoder(fs);
    }
    else if (_filePath.LastIndexOf(".flac") != -1)
    {
        Stream fs = File.OpenRead(_filePath);
        _waveSource = new CSCore.Codecs.FLAC.FlacFile(fs, CSCore.Codecs.FLAC.FlacPreScanMode.Default);
        // _waveSource = new CSCore.Codecs.FLAC.FlacFile(_filePath);
    }
    else
    {
        _waveSource = CodecFactory.Instance.GetCodec(_filePath);
    }

    _soundOut.Initialize(_waveSource);
    _soundOut.Volume = Volume / 100f;
    //_soundOut.Stopped += _soundOut_Stopped;
    _total = _waveSource.GetLength();
}
private void StartCapture(string fileName)
{
    //capture mode
    CaptureMode = (CaptureMode)1;
    DataFlow dataFlow = CaptureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;

    //get the active audio devices
    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    if (!devices.Any())
    {
        MessageBox.Show("No devices found.");
        return;
    }

    int selectedDeviceIndex = 0;
    SelectedDevice = devices[selectedDeviceIndex];
    if (SelectedDevice == null)
    {
        return;
    }

    if (CaptureMode == CaptureMode.Capture)
    {
        _soundIn = new WasapiCapture();
    }
    else
    {
        _soundIn = new WasapiLoopbackCapture();
    }
    _soundIn.Device = SelectedDevice;

    //sample rate of the audio
    int sampleRate = 16000;
    //bits per sample
    int bitsPerSample = 16;
    //channels
    int channels = 1;

    //initialize the soundIn instance
    _soundIn.Initialize();

    //create a SoundInSource around the soundIn instance;
    //it will provide the data captured by the soundIn instance
    var soundInSource = new SoundInSource(_soundIn) { FillWithZeros = false };

    //create a source that converts the data provided by the
    //soundInSource to the target format, using the "Fluent" extension methods
    IWaveSource convertedSource = soundInSource
        .ChangeSampleRate(sampleRate)   // sample rate
        .ToSampleSource()
        .ToWaveSource(bitsPerSample);   // bits per sample

    //channels == 1, so we need mono audio
    convertedSource = convertedSource.ToMono();

    AudioToText audioToText = new AudioToText();
    audioToText.SetFolderPermission(_folderPath);

    //create a new wave file
    waveWriter = new WaveWriter(fileName, convertedSource.WaveFormat);

    //register an event handler for the DataAvailable event of the soundInSource
    //Important: use the DataAvailable of the SoundInSource;
    //if you use the DataAvailable event of the ISoundIn itself,
    //the recorded data might not be available at the soundInSource yet
    soundInSource.DataAvailable += (s, e) =>
    {
        //read data from the convertedSource
        //important: don't use e.Data here -
        //it contains the raw data provided by the soundInSource,
        //which won't have your target format
        byte[] buffer = new byte[convertedSource.WaveFormat.BytesPerSecond / 2];
        int read;
        //keep reading as long as we still get some data;
        //when using such a loop, make sure soundInSource.FillWithZeros is set to false
        while ((read = convertedSource.Read(buffer, 0, buffer.Length)) > 0)
        {
            //write the read data to the file
            // ReSharper disable once AccessToDisposedClosure
            waveWriter.Write(buffer, 0, read);
        }
    };

    //we've set everything we need -> start capturing data
    objStopWatch.Start();
    _soundIn.Start();
}
public void SetEndpoint(int index)
{
    AudioSessionManager2 audioSessionManager = AudioSessionManager2.FromMMDevice(
        MMDeviceEnumerator.EnumerateDevices(DataFlow.Render, DeviceState.Active)[index]);
    audioSessionEnumerator = audioSessionManager.GetSessionEnumerator();
}
// ReSharper disable once UnusedParameter.Local static void Main(string[] args) { //choose the capture mode Console.WriteLine("Select capturing mode:"); Console.WriteLine("- 1: Capture"); Console.WriteLine("- 2: LoopbackCapture"); CaptureMode captureMode = (CaptureMode)ReadInteger(1, 2); DataFlow dataFlow = captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render; //--- //select the device: var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active); if (!devices.Any()) { Console.WriteLine("No devices found."); return; } Console.WriteLine("Select device:"); for (int i = 0; i < devices.Count; i++) { Console.WriteLine("- {0:#00}: {1}", i, devices[i].FriendlyName); } int selectedDeviceIndex = ReadInteger(Enumerable.Range(0, devices.Count).ToArray()); var device = devices[selectedDeviceIndex]; //--- choose format Console.WriteLine("Enter sample rate:"); int sampleRate; do { sampleRate = ReadInteger(); if (sampleRate >= 100 && sampleRate <= 200000) { break; } Console.WriteLine("Must be between 1kHz and 200kHz."); } while (true); Console.WriteLine("Choose bits per sample (8, 16, 24 or 32):"); int bitsPerSample = ReadInteger(8, 16, 24, 32); //note: this sample does not support multi channel formats like surround 5.1,... //if this is required, the DmoChannelResampler class can be used Console.WriteLine("Choose number of channels (1, 2):"); int channels = ReadInteger(1, 2); //--- //start recording //create a new soundIn instance using (WasapiCapture soundIn = captureMode == CaptureMode.Capture ? new WasapiCapture() : new WasapiLoopbackCapture()) { //optional: set some properties soundIn.Device = device; //... 
        //initialize the soundIn instance
        soundIn.Initialize();

        //create a SoundInSource around the soundIn instance
        //this SoundSource will provide the data captured by the soundIn instance
        SoundInSource soundInSource = new SoundInSource(soundIn) { FillWithZeros = false };

        //create a source that converts the data provided by the
        //soundInSource to any other format
        //in this case the "Fluent"-extension methods are being used
        IWaveSource convertedSource = soundInSource
            .ChangeSampleRate(sampleRate) //sample rate
            .ToSampleSource()
            .ToWaveSource(bitsPerSample); //bits per sample

        //channels...
        using (convertedSource = channels == 1 ? convertedSource.ToMono() : convertedSource.ToStereo())
        {
            //create a new wave file
            using (WaveWriter waveWriter = new WaveWriter("out.wav", convertedSource.WaveFormat))
            {
                //register an event handler for the DataAvailable event of
                //the soundInSource
                //important: use the DataAvailable event of the SoundInSource
                //if you use the DataAvailable event of the ISoundIn itself,
                //the data recorded by that event might not be available at the
                //soundInSource yet
                soundInSource.DataAvailable += (s, e) =>
                {
                    //read data from the convertedSource
                    //important: don't use e.Data here
                    //e.Data contains the raw data provided by the
                    //soundInSource, which won't have your target format
                    byte[] buffer = new byte[convertedSource.WaveFormat.BytesPerSecond / 2];
                    int read;
                    //keep reading as long as we still get some data
                    //if you're using such a loop, make sure that soundInSource.FillWithZeros is set to false
                    while ((read = convertedSource.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        //write the read data to a file
                        // ReSharper disable once AccessToDisposedClosure
                        waveWriter.Write(buffer, 0, read);
                    }
                };

                //we've set everything we need -> start capturing data
                soundIn.Start();

                Console.WriteLine("Capturing started ... press any key to stop.");
                Console.ReadKey();

                soundIn.Stop();
            }
        }
    }

    Process.Start("out.wav");
}
// ReSharper disable once UnusedParameter.Local
static void Main(string[] args)
{
    CaptureMode captureMode;
    if (Boolean.Parse(ConfigurationManager.AppSettings["defaultToLoopback"]))
    {
        captureMode = CaptureMode.LoopbackCapture;
    }
    else
    {
        Console.WriteLine("Select capturing mode:");
        Console.WriteLine("- 1: Capture");
        Console.WriteLine("- 2: LoopbackCapture");
        captureMode = (CaptureMode)ReadInteger(1, 2);
    }
    DataFlow dataFlow = captureMode == CaptureMode.Capture ? DataFlow.Capture : DataFlow.Render;

    var devices = MMDeviceEnumerator.EnumerateDevices(dataFlow, DeviceState.Active);
    if (!devices.Any())
    {
        Console.WriteLine("No devices found.");
        return;
    }

    MMDevice device;
    if (devices.Count == 1)
    {
        device = devices[0];
    }
    else
    {
        Console.WriteLine("Select device:");
        for (int i = 0; i < devices.Count; i++)
        {
            Console.WriteLine("- {0:#00}: {1}", i, devices[i].FriendlyName);
        }
        int selectedDeviceIndex = ReadInteger(Enumerable.Range(0, devices.Count).ToArray());
        device = devices[selectedDeviceIndex];
    }

    int sampleRate = Int32.Parse(ConfigurationManager.AppSettings["sampleRate"]);
    int bitsPerSample = Int32.Parse(ConfigurationManager.AppSettings["bitsPerSample"]);
    int channels = 1;

    //create a new soundIn instance
    using (WasapiCapture soundIn = captureMode == CaptureMode.Capture
        ? new WasapiCapture()
        : new WasapiLoopbackCapture())
    {
        //optional: set some properties
        soundIn.Device = device;
        //...

        //initialize the soundIn instance
        soundIn.Initialize();

        //create a SoundInSource around the soundIn instance
        //this SoundSource will provide the data captured by the soundIn instance
        SoundInSource soundInSource = new SoundInSource(soundIn) { FillWithZeros = false };

        //create a source that converts the data provided by the
        //soundInSource to any other format
        //in this case the "Fluent"-extension methods are being used
        IWaveSource convertedSource = soundInSource
            .ChangeSampleRate(sampleRate) //sample rate
            .ToSampleSource()
            .ToWaveSource(bitsPerSample); //bits per sample

        //channels...
        using (convertedSource = channels == 1 ? convertedSource.ToMono() : convertedSource.ToStereo())
        {
            //create a new wave file
            var fileName = "out-" + DateTime.UtcNow.ToString("yyyy-MM-ddTHH-mm-ss") + ".wav";
            using (WaveWriter waveWriter = new WaveWriter(fileName, convertedSource.WaveFormat))
            {
                //register an event handler for the DataAvailable event of
                //the soundInSource
                //important: use the DataAvailable event of the SoundInSource
                //if you use the DataAvailable event of the ISoundIn itself,
                //the data recorded by that event might not be available at the
                //soundInSource yet
                soundInSource.DataAvailable += (s, e) =>
                {
                    //read data from the convertedSource
                    //important: don't use e.Data here
                    //e.Data contains the raw data provided by the
                    //soundInSource, which won't have your target format
                    byte[] buffer = new byte[convertedSource.WaveFormat.BytesPerSecond / 2];
                    int read;
                    //keep reading as long as we still get some data
                    //if you're using such a loop, make sure that soundInSource.FillWithZeros is set to false
                    while ((read = convertedSource.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        //write the read data to a file
                        // ReSharper disable once AccessToDisposedClosure
                        waveWriter.Write(buffer, 0, read);
                    }
                };

                //we've set everything we need -> start capturing data
                soundIn.Start();

                Console.WriteLine("Capturing started ... press any key to stop.");
                Console.ReadKey();

                soundIn.Stop();
            }
        }
    }
}
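The samples above repeatedly call a `ReadInteger` helper — bare, with a pair of choices like `ReadInteger(1, 2)`, and with a full array of allowed indices — but its definition is not shown. A minimal sketch of what such a helper could look like, given those call sites (the name `ConsoleInput` and the single `params` overload are assumptions; the real helper in the sample project may be shaped differently):

```csharp
using System;
using System.Linq;

static class ConsoleInput
{
    //reads an integer from the console; if allowedValues is non-empty,
    //keeps prompting until the user enters one of them
    public static int ReadInteger(params int[] allowedValues)
    {
        while (true)
        {
            string line = Console.ReadLine();
            int value;
            if (int.TryParse(line, out value) &&
                (allowedValues.Length == 0 || allowedValues.Contains(value)))
            {
                return value;
            }
            Console.WriteLine("Invalid input, try again:");
        }
    }
}
```

A single `params int[]` overload covers every call shape used above: `ReadInteger()` accepts any integer, while `ReadInteger(8, 16, 24, 32)` or `ReadInteger(Enumerable.Range(0, devices.Count).ToArray())` restricts input to the listed values.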