Android Development: Recording Audio with AudioRecord (and How to Release It)
Preface
The Android SDK provides two audio-capture APIs: MediaRecorder and AudioRecord. The former is the higher-level one; it takes the audio picked up by the phone's microphone, encodes and compresses it (e.g. to AMR or MP3) and writes it straight to a file. The latter sits closer to the hardware and gives much finer control: it delivers the raw PCM audio data frame by frame.
Implementation flow
Obtain the recording permissions
Get the minimum buffer size for each frame of audio
Initialize the AudioRecord
Start recording and save the captured audio to a file
Stop recording
Add the header to the audio file and convert it to WAV
Release the AudioRecord; the recording flow is complete
Obtaining permissions
Recording needs the RECORD_AUDIO permission declared in AndroidManifest.xml (plus the external-storage read/write permissions if you save outside app-private storage). On Android 6.0 (API 23) and above these dangerous permissions must also be requested at runtime, not just declared in the manifest.
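A minimal runtime-request sketch, assuming an Activity and using RECORD_AUDIO as the key permission (ContextCompat/ActivityCompat come from androidx.core; the exact permission list in the original is not shown, so treat this as illustrative):

// Arbitrary request code used to identify the permission callback.
private static final int REQUEST_CODE_RECORD_AUDIO = 1;

private void requestRecordPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        // Ask the user for the microphone permission; the result arrives in onRequestPermissionsResult().
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO},
                REQUEST_CODE_RECORD_AUDIO);
    }
}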
Getting the minimum buffer size
private Integer mRecordBufferSize;

private void initMinBufferSize() {
    // Query the minimum buffer size (in bytes) for one read
    mRecordBufferSize = AudioRecord.getMinBufferSize(8000,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
}
The first parameter, sampleRateInHz, is the sample rate in hertz. As the method's Javadoc notes, it must lie between 4000 and 192000. The limits are defined in AudioFormat:
public static final int SAMPLE_RATE_HZ_MIN = 4000;   // minimum: 4000
public static final int SAMPLE_RATE_HZ_MAX = 192000; // maximum: 192000
The second parameter, channelConfig, describes the channel configuration, e.g. left/right/front/back channel. The constants live in AudioFormat:
public static final int CHANNEL_IN_LEFT = 0x4;   // left channel
public static final int CHANNEL_IN_RIGHT = 0x8;  // right channel
public static final int CHANNEL_IN_FRONT = 0x10; // front channel
public static final int CHANNEL_IN_BACK = 0x20;  // back channel
public static final int CHANNEL_IN_LEFT_PROCESSED = 0x40;
public static final int CHANNEL_IN_RIGHT_PROCESSED = 0x80;
public static final int CHANNEL_IN_FRONT_PROCESSED = 0x100;
public static final int CHANNEL_IN_BACK_PROCESSED = 0x200;
public static final int CHANNEL_IN_PRESSURE = 0x400;
public static final int CHANNEL_IN_X_AXIS = 0x800;
public static final int CHANNEL_IN_Y_AXIS = 0x1000;
public static final int CHANNEL_IN_Z_AXIS = 0x2000;
public static final int CHANNEL_IN_VOICE_UPLINK = 0x4000;
public static final int CHANNEL_IN_VOICE_DNLINK = 0x8000;
public static final int CHANNEL_IN_MONO = CHANNEL_IN_FRONT;                       // mono
public static final int CHANNEL_IN_STEREO = (CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT); // stereo (left + right)
The third parameter, audioFormat, describes the format of the audio data.
Note: most phones are only guaranteed to support 16-bit PCM; other encodings may be rejected with a "bad value" error (a quick way to probe for this is sketched after the constant list below).
public static final int ENCODING_PCM_16BIT = 2; // 16-bit PCM
public static final int ENCODING_PCM_8BIT = 3;  // 8-bit PCM
public static final int ENCODING_PCM_FLOAT = 4; // single-precision float PCM
public static final int ENCODING_AC3 = 5;
public static final int ENCODING_E_AC3 = 6;
public static final int ENCODING_DTS = 7;
public static final int ENCODING_DTS_HD = 8;
public static final int ENCODING_MP3 = 9;       // MP3; may fail on devices that do not support it
public static final int ENCODING_AAC_LC = 10;
public static final int ENCODING_AAC_HE_V1 = 11;
public static final int ENCODING_AAC_HE_V2 = 12;
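As a quick way to probe what the device accepts, here is a small helper (my own addition, not part of the original flow); getMinBufferSize() returns an error code instead of a size when the combination is not supported:

// Returns true if this sample rate / channel / encoding combination is usable on the device.
private boolean isConfigSupported(int sampleRate, int channelConfig, int audioFormat) {
    int size = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
    return size != AudioRecord.ERROR_BAD_VALUE && size != AudioRecord.ERROR;
}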
Initializing the AudioRecord
private AudioRecord mAudioRecord;

private void initAudioRecord() {
    mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
            8000,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            mRecordBufferSize);
}
The first parameter, audioSource: the audio source; here we record from the microphone, MediaRecorder.AudioSource.MIC.
The second parameter, sampleRateInHz: the sample rate in hertz; keep it identical to the value passed to getMinBufferSize above.
The third parameter, channelConfig: the channel configuration (left/right/front/back, etc.); keep it identical to the value used above.
The fourth parameter, audioFormat: the audio data format; keep it identical to the value used above.
The fifth parameter, bufferSizeInBytes: the buffer size, i.e. the value we obtained from AudioRecord.getMinBufferSize above.
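It is also worth checking right after construction that the recorder really initialized; a small sketch (my addition) using the mAudioRecord field above:

if (mAudioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    // Construction failed: bad parameters, missing permission, or the mic is already in use.
    Log.e(TAG, "initAudioRecord: AudioRecord did not initialize");
    mAudioRecord.release();
    mAudioRecord = null;
}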
Starting recording and saving the audio file
private volatile boolean mWhetherRecord; // volatile: flipped on the UI thread, read on the recording thread
private File pcmFile;

private void startRecord() {
    pcmFile = new File(AudioRecordActivity.this.getExternalCacheDir().getPath(), "audioRecord.pcm");
    mWhetherRecord = true;
    new Thread(new Runnable() {
        @Override
        public void run() {
            mAudioRecord.startRecording(); // start capturing
            FileOutputStream fileOutputStream = null;
            try {
                fileOutputStream = new FileOutputStream(pcmFile);
                byte[] bytes = new byte[mRecordBufferSize];
                while (mWhetherRecord) {
                    mAudioRecord.read(bytes, 0, bytes.length); // read one buffer of PCM data
                    fileOutputStream.write(bytes);
                    fileOutputStream.flush();
                }
                Log.e(TAG, "run: recording stopped");
                mAudioRecord.stop(); // stop capturing
                fileOutputStream.flush();
                fileOutputStream.close();
                addHeadData(); // add the WAV header and convert to .wav
            } catch (FileNotFoundException e) {
                e.printStackTrace();
                mAudioRecord.stop();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
}
A word on why a boolean flag, rather than the recorder's own state, is used to end the loop. AudioRecord does expose its recording state, so it is tempting to use that state as the while condition, but that is the wrong approach: the microphone is hardware, hardware interaction is asynchronous and can lag considerably, so the reported state is also delayed; sometimes the stream is already gone while the state still says "recording". (The flag is declared volatile above so the change made on the UI thread is visible to the recording thread.)
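One extra hardening step worth considering (my sketch, not in the original code): check the value returned by read() and only write the bytes that were actually captured. The body of the while loop above would then look roughly like this:

int readBytes = mAudioRecord.read(bytes, 0, bytes.length);
if (readBytes > 0) {
    fileOutputStream.write(bytes, 0, readBytes); // write only what was actually read
    fileOutputStream.flush();
} else if (readBytes == AudioRecord.ERROR_INVALID_OPERATION
        || readBytes == AudioRecord.ERROR_BAD_VALUE) {
    Log.e(TAG, "run: read() failed with error code " + readBytes);
    break; // the recorder is in a bad state, leave the loop
}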
Stopping recording
Stopping is ultimately a call to mAudioRecord.stop(), but since the recording thread above already calls stop() once the save loop exits, all the stop method here needs to do is flip the boolean flag:
private void stopRecord() {
    mWhetherRecord = false;
}
Adding the header and converting to WAV
When recording finishes and you try to play the file from the storage directory, the player will report that it cannot play it. The reason is that no header has been added: audio captured from the microphone is raw PCM, which carries no header, so a player cannot tell the sample rate, bit depth and so on, and refuses to play it, which is obviously inconvenient. To convert PCM to WAV we only need to prepend at least 44 bytes of WAV header to the start of the PCM file.
Offset   Field          Content
00-03    ChunkId        "RIFF"
04-07    ChunkSize      total number of bytes from the next address to the end of the file (size of this chunk's data)
08-11    fccType        "WAVE"
12-15    SubChunkId1    "fmt " (the last character is a space)
16-19    SubChunkSize1  usually 16, the size in bytes of the fmt chunk's data
20-21    FormatTag      1 means PCM encoding
22-23    Channels       number of channels: 1 for mono, 2 for stereo
24-27    SamplesPerSec  sample rate
28-31    BytesPerSec    byte rate: sampleRate * (bitsPerSample / 8) * channels
32-33    BlockAlign     size of one sample frame: bitsPerSample * channels / 8
34-35    BitsPerSample  bits per sample
36-39    SubChunkId2    "data"
40-43    SubChunkSize2  length of the audio data in bytes
44-...   data           the audio data
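As a quick sanity check of the formulas above, here is what the derived fields come out to for the configuration used in this article (8000 Hz, mono, 16-bit PCM):

int sampleRate = 8000, channels = 1, bitsPerSample = 16;
int byteRate   = sampleRate * (bitsPerSample / 8) * channels; // 8000 * 2 * 1 = 16000 bytes per second
int blockAlign = bitsPerSample * channels / 8;                // 16 * 1 / 8 = 2 bytes per sample frame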
private File handlerWavFile; // declared here because the snippet below uses it without showing the declaration

private void addHeadData() {
    pcmFile = new File(AudioRecordActivity.this.getExternalCacheDir().getPath(), "audioRecord.pcm");
    handlerWavFile = new File(AudioRecordActivity.this.getExternalCacheDir().getPath(), "audioRecord_handler.wav");
    PcmToWavUtil pcmToWavUtil = new PcmToWavUtil(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    pcmToWavUtil.pcmToWav(pcmFile.toString(), handlerWavFile.toString());
}
The utility class that writes the header
Note that the input File and the output File must not be the same file, because the conversion streams the data straight through without buffering the whole input.
public class PcmToWavUtil {

    private static final String TAG = "PcmToWavUtil";

    /** Buffer size used while copying the audio data */
    private int mBufferSize;
    /** Sample rate */
    private int mSampleRate;
    /** Channel configuration */
    private int mChannel;

    /**
     * @param sampleRate sample rate
     * @param channel    channel configuration
     * @param encoding   audio data format
     */
    PcmToWavUtil(int sampleRate, int channel, int encoding) {
        this.mSampleRate = sampleRate;
        this.mChannel = channel;
        this.mBufferSize = AudioRecord.getMinBufferSize(mSampleRate, mChannel, encoding);
    }

    /**
     * Convert a PCM file to a WAV file.
     *
     * @param inFilename  source file path
     * @param outFilename destination file path
     */
    public void pcmToWav(String inFilename, String outFilename) {
        FileInputStream in;
        FileOutputStream out;
        long totalAudioLen;   // length of the raw audio data
        long totalDataLen;    // length of everything after the ChunkSize field
        long longSampleRate = mSampleRate;
        int channels = mChannel == AudioFormat.CHANNEL_IN_MONO ? 1 : 2;
        long byteRate = 16 * mSampleRate * channels / 8;
        byte[] data = new byte[mBufferSize];
        try {
            in = new FileInputStream(inFilename);
            out = new FileOutputStream(outFilename);
            totalAudioLen = in.getChannel().size();
            totalDataLen = totalAudioLen + 36;
            writeWaveFileHeader(out, totalAudioLen, totalDataLen,
                    longSampleRate, channels, byteRate);
            int len;
            while ((len = in.read(data)) != -1) {
                out.write(data, 0, len); // write only the bytes actually read (the original wrote the full buffer every time)
                out.flush();
            }
            Log.e(TAG, "pcmToWav: conversion finished");
            in.close();
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Write the 44-byte WAV header. */
    private void writeWaveFileHeader(FileOutputStream out, long totalAudioLen,
                                     long totalDataLen, long longSampleRate,
                                     int channels, long byteRate) throws IOException {
        byte[] header = new byte[44];
        // RIFF chunk
        header[0] = 'R';
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';
        header[4] = (byte) (totalDataLen & 0xff);
        header[5] = (byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (byte) ((totalDataLen >> 24) & 0xff);
        // WAVE
        header[8] = 'W';
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        // 'fmt ' chunk
        header[12] = 'f';
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' ';
        // 4 bytes: size of 'fmt ' chunk
        header[16] = 16;
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        // format = 1 (PCM)
        header[20] = 1;
        header[21] = 0;
        header[22] = (byte) channels;
        header[23] = 0;
        // sample rate
        header[24] = (byte) (longSampleRate & 0xff);
        header[25] = (byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (byte) ((longSampleRate >> 24) & 0xff);
        // byte rate
        header[28] = (byte) (byteRate & 0xff);
        header[29] = (byte) ((byteRate >> 8) & 0xff);
        header[30] = (byte) ((byteRate >> 16) & 0xff);
        header[31] = (byte) ((byteRate >> 24) & 0xff);
        // block align = channels * bitsPerSample / 8 (the original hard-coded two channels here)
        header[32] = (byte) (channels * 16 / 8);
        header[33] = 0;
        // bits per sample
        header[34] = 16;
        header[35] = 0;
        // data chunk
        header[36] = 'd';
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (byte) (totalAudioLen & 0xff);
        header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
        header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
        header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
        out.write(header, 0, 44);
    }
}
Releasing the AudioRecord; the recording flow is complete
Call the release() method to free the resources:
mAudioRecord.release();
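A slightly fuller teardown sketch (my own wrapping of the call above, reusing the fields defined earlier), so it is safe to call from onDestroy() even if recording never started:

private void releaseRecorder() {
    if (mAudioRecord != null) {
        mWhetherRecord = false; // make sure the recording loop has ended
        if (mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED) {
            mAudioRecord.stop(); // stop() is only legal on an initialized recorder
        }
        mAudioRecord.release(); // free the native audio resources
        mAudioRecord = null;    // drop the reference so the instance is not reused
    }
}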
After that you can find the audio file in the target directory and play it.
Finally, a few other APIs worth knowing
Getting the AudioRecord initialization state
public int getState() {
    return mState;
}
Note: this is the initialization state, not the recording state; it only ever returns two values:
AudioRecord#STATE_INITIALIZED    // initialized successfully
AudioRecord#STATE_UNINITIALIZED  // not initialized
Getting the AudioRecord recording state
public int getRecordingState() {
    synchronized (mRecordingStateLock) {
        return mRecordingState;
    }
}
Returns the recording state; it also only has two values:
AudioRecord#RECORDSTATE_STOPPED    // recording stopped
AudioRecord#RECORDSTATE_RECORDING  // currently recording
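These states are still useful as one-shot guards (just not as the condition of the capture loop, as explained earlier); for example, a hypothetical guard before starting a new session:

// Only start a new recording session if the recorder is initialized and not already recording.
if (mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED
        && mAudioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
    startRecord();
}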
Summary
That is the complete flow of recording audio with AudioRecord in Android development, from requesting permissions through saving the PCM data, converting it to WAV, and finally releasing the recorder. Hopefully it helps you solve the problems you run into.