Saturday, November 17, 2012
I wish that this were a progress report instead of a status update, but so far we haven't raised enough to begin data collection with Mechanical Turk. We have had a paper accepted for publication, and we are trying to get access to Google Compute Engine to reduce the large Amazon bill we expect from asking people who claim to have good pronunciation and reading skill to record exemplars. The problem is that the number of such exemplars needs to be relatively large. For those of you familiar with the TalkNicer demo, this is the "exemplar sufficiency index", and it needs to meet a certain threshold for at least 5,000 words of instructional material before I feel comfortable committing to an expensive data collection effort.
So in summary, please donate more, or if you have already donated, please ask multiple people to at least match your donation. It will be worth it.
Update: How much more do we need? About $4,000 based on the preliminary per-phoneme exemplar sufficiency index including English homographs and Mechanical Turk performance expectation estimates. Also updated: cmusphinx.sourceforge.net/wiki/pronunciation_evaluation
Further update: I am very sorry for delaying Troy's posts here (the delay was due to the WebRTC and related questions), but they have been available at e.g. cmusphinx.sourceforge.net/2012/08/gsoc-2012-pronunciation-evaluation-troy-project-conclusions
Sunday, August 26, 2012
Ronanki: GSoC 2012 Pronunciation Evaluation: Summary and Conclusions
This article briefly summarizes the implementation of the GSoC 2012 Pronunciation Evaluation project.
I started with Sphinx forced alignment and obtained spectral-matching acoustic scores and durations at the phone and word level using the WSJ models. After that, I concentrated mainly on two things as part of the GSoC 2012 project: edit-distance neighbor-phone decoding, and scoring routines for both text-dependent and text-independent systems.
Edit-distance Neighbor phones decoding:
1. I started with a single-phone decoder and then explored a three-phone decoder, a word decoder, and a complete phrase decoder, in each case providing neighboring phones as alternatives to the expected phone.
2. The decoding results showed that word-level and phrase-level decoding using JSGF grammars are almost the same.
3. This method helps detect mispronunciations at the phone level, and can also help detect homographs if the decoding error rate can be reduced.
Scoring Routines:
Text-dependent:
This method is based on exemplars for each phrase. First, the mean acoustic score and mean duration, along with their deviations, are calculated for each phone in the phrase from the exemplar recordings. Given a test recording, each phone in the phrase is then compared with the exemplar statistics: z-scores are calculated, and normalized scores are derived from the maximum and minimum z-scores observed in the exemplar recordings. All phone scores are aggregated to get a word score, and all word scores are aggregated with part-of-speech (POS) weights to get the complete phrase score.
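A minimal sketch of this scoring idea in Python, with made-up numbers; in the real system the statistics come from the exemplar recordings and the POS weights from the prompts table:

def phone_score(value, mean, std, z_min, z_max):
    # z-score against exemplar statistics, rescaled to [0, 1] using the
    # minimum and maximum z-scores observed in the exemplar recordings.
    z = (value - mean) / std
    z = max(min(z, z_max), z_min)
    return (z - z_min) / (z_max - z_min)

# Hypothetical per-phone acoustic scores for one word, with exemplar mean/std.
phones = [(-63864, -65000.0, 2000.0), (-126819, -120000.0, 5000.0)]
scores = [phone_score(v, m, s, -3.0, 3.0) for v, m, s in phones]
word_score = sum(scores) / len(scores)  # aggregate phone scores into a word score

# Aggregate word scores into a phrase score using POS weights.
pos_weight = {"verb": 0.9, "article": 0.4}
words = [("approach", "verb", word_score), ("the", "article", 0.8)]
phrase_score = (sum(pos_weight[p] * s for _, p, s in words) /
                sum(pos_weight[p] for _, p, _ in words))
print(word_score, phrase_score)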
Text-independent:
This method is based on statistics built beforehand from a corpus. In this project, I used the TIMIT corpus to build statistics for each phone based on its position (begin/middle/end) in the word. Given any test file, each phone's acoustic score and duration are compared with the corresponding phone statistics selected by this contextual information. The scoring method is the same as in the text-dependent system.
Demo:
Please try our demo at http://talknicer.net/~ronanki/test/ and help us by giving feedback.
Documentation and codes:
All code is uploaded to the CMU Sphinx SVN at
http://sourceforge.net/p/cmusphinx/code/HEAD/tree/branches/speecheval/ronanki/ and raw documentation of the project can be found here.
Conclusions:
The pronunciation evaluation system really helps users improve their pronunciation by trying multiple times, and it lets you correct yourself by giving the necessary feedback at the phone and word level. I couldn't complete some of the things I mentioned earlier during the project, but I hope to keep contributing to this project in the future.
This summer has been a great experience for me. Google Summer of Code 2012 has finally ended. I would like to thank my mentor James Salsman for his time, continuous efforts, and help. The way he motivated me really helped me focus on the project the whole time. I would also like to thank my friend Troy Lee, as well as Nickolay and Bhiksha Raj, for their help and comments during the project.
Wednesday, August 22, 2012
Troy: GSoC 2012 Pronunciation Evaluation Week 7 Status
Last week, I was still working on the data collection website.
Thanks to Robert (butler1970@gmail.com) for trying out the website and listing the issues he encountered on this page: https://www.evernote.com/pub/butler1970/cmusphinx#b=11634bf8-7be9-479f-a20e-6fa1e54b322b&n=398dc728-b3f0-4ceb-8ccf-89295b98a6d7
Issue #1: The Student Page is under construction
The first stage of the website is for collecting exemplar recordings, so the student page had not been implemented at that time.
Issue #2: The inconvenient birthdate control
The birthdate control has been replaced with the standard HTML5 <input type="datetime"> control. Because the datetime input control is a new element in HTML5, currently only Chrome, Safari and Opera support the popup date selection. On other browsers, which have no support yet, the control is simply displayed as an input box; the user can type in the date and the background script checks whether the format is correct.
Issue #3: The incorrect error message "Invalid date format" on the additional information update page
After digging into the source code for several hours, I found that the bug lies in the order in which the MySQL-related functions are invoked. The processing steps on the additional information update page are as follows:
a) the client side posts the user input to the server;
b) the server side first uses the mysql_escape_string function to preprocess the user input, to keep the later MySQL queries safe;
c) the format of each field is checked, including whether the user entered a valid date;
d) the MySQL database is updated with the new information.
Since only step d) needs the MySQL server, I had put the database connection code after step c), not knowing that the mysql_escape_string function also requires a database connection. In the previous implementation, mysql_escape_string therefore returned an empty string, which led to the "invalid date format" error.
Secondly, the exemplar recording page was updated with the following features:
1) Automatically move to the next utterance after the user records and plays back the current recording;
2) Extra navigation controls for selecting the phrase to record;
3) When the user opens the exemplar recording page, the first un-recorded utterance is the first one shown to the user;
4) The enabling and disabling of the recording and playback buttons is tied to the database information, i.e. if the user has recorded the phrase before, both the recording and playback buttons are enabled; otherwise only recording is allowed.
The third major part done last week was the student page, which had previously been left empty.
On the student page, users can now practice their pronunciation by recording the phrases in the database and listening to the exemplar recordings in the system. The features are:
1) Full recording and playback functionality, as on the exemplar recording page;
2) When navigating to each phrase, up to 5 randomly chosen exemplar recordings are retrieved from the database and listed on the page to help the students (see the sketch after this list);
3) Additionally, to get some exemplar recordings into the system, I had to manually transcribe several sentences and add the recordings myself. Once many people are contributing exemplar recordings, I won't need to do manual transcription any more.
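A minimal sketch of how such a random selection could be queried from Python; the table and column names here are hypothetical, not the project's actual schema (which is documented at http://talknicer.net/w/Database_schema):

import MySQLdb  # MySQL-python package

def random_exemplars(conn, phrase_id, limit=5):
    # Return up to `limit` randomly chosen exemplar recordings for one phrase.
    cur = conn.cursor()
    cur.execute(
        "SELECT audio_path FROM recordings "
        "WHERE phrase_id = %s AND is_exemplar = 'Y' "
        "ORDER BY RAND() LIMIT %s",
        (phrase_id, limit))
    return [row[0] for row in cur.fetchall()]

# conn = MySQLdb.connect(host="localhost", user="...", passwd="...", db="...")
# print(random_exemplars(conn, 1))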
For this week, two major tasks to be done: integration with Ronanki's evaluation scripts and mid-term report.
Tuesday, August 21, 2012
Ronanki: GSoC 2012 Pronunciation Evaluation Final week Report
Here comes my final report for the Pronunciation Evaluation project. The demo system has been modified a little; you can give it a try and test the text-independent system at http://talknicer.net/~ronanki/test
Last week, I tested the system with both Indian-accented and US-accented speech. For the US accent, I don't have any mispronunciation data, so I just tested with the SA1 and SA2 (TIMIT) sentences. For the Indian accent, I prepared data with both correct pronunciations and mispronunciations, which can be downloaded at http://talknicer.net/~ronanki/Database.tar.tgz
The results are provided at http://talknicer.net/~ronanki/results/. The scripts for evaluating the database are uploaded to the project's SVN folder. Phonological features are provided in SVN, but I couldn't build models with them in time.
The project and the required scripts can be downloaded from
http://sourceforge.net/p/cmusphinx/code/HEAD/tree/branches/speecheval/ronanki/
Please go through README files provided in each folder.
Finally, I would like to thank my mentor James Salsman, as well as Nickolay, Bhiksha Raj, and the rest of the community, for helping me all the time. I hope to keep contributing to this project over time.
Ronanki: GSoC 2012 Pronunciation Evaluation week 12
This week, I tried to extend the TIMIT statistics to 5 or 6 per phoneme based on syllable position, and alternatively to use CART modelling to predict duration and acoustic score from training data. I did this to some extent using wagon from the Edinburgh Speech Tools.
Regarding mispronunciation detection accuracy, I collected data from 8 non-native speakers, with 5 words each recorded 10 times in both correct and wrong ways and 5 sentences each recorded 3-5 times in both correct and wrong ways. Here is the link to it: http://researchweb.iiit.ac.in/~srikanth.ronanki/GSoC/PE_database/ and the description of the database is at http://researchweb.iiit.ac.in/~srikanth.ronanki/GSoC/PE_database/description.txt
I need to split each speaker's data into individual files, which is a tedious task and is taking some time. So far I have finished one speaker's data, and the current text-independent system is doing well: 46 out of 50 correctly pronounced words are detected as good pronunciations, and 42 out of 50 wrongly pronounced words are detected as mispronunciations, using a common threshold for all words. It will take one or two more days to produce complete statistics.
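A minimal sketch of this kind of common-threshold decision, with made-up scores and a hypothetical threshold value:

def classify(word_scores, threshold):
    # Scores at or above the shared threshold count as good pronunciation;
    # anything below is flagged as a mispronunciation.
    return ["good" if s >= threshold else "mispronounced" for s in word_scores]

# Hypothetical per-word scores for correctly and wrongly pronounced test words.
correct_scores = [0.82, 0.74, 0.91, 0.55, 0.68]
wrong_scores = [0.21, 0.35, 0.48, 0.52, 0.12]
threshold = 0.5  # one threshold shared by all words

accepted = classify(correct_scores, threshold).count("good")
flagged = classify(wrong_scores, threshold).count("mispronounced")
print(accepted, "of", len(correct_scores), "correct words accepted")
print(flagged, "of", len(wrong_scores), "wrong words flagged")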
In parallel, I completed the phonological features and generated acoustic models for the TIMIT database, because I had difficulty finding the complete set of wav files for the WSJ database. But both decoding and forced alignment failed with the new models built on phonological features. I also had trouble generating appropriate models with Sphinx MFCC features; even when they were generated properly, I didn't get results from forced alignment or decoding after substituting them for the WSJ models. I will try to overcome these issues by next week.
Ronanki: GSoC 2012 Pronunciation Evaluation Week 11
This week, I only managed to do the data collection required to evaluate the project.
The database collection is finished but is spread across different servers; I am trying to bring it into one place. You can find part of the data for one speaker here: http://researchweb.iiit.ac.in/~srikanth.ronanki/GSoC/PE_database/Sru/
The description of the data is at http://researchweb.iiit.ac.in/~srikanth.ronanki/GSoC/PE_database/description.txt
Ronanki: GSoC 2012 Pronunciation Evaluation Week 10
This week, I explored CART models a little, but couldn't complete them. The models are trained using wagon from the Edinburgh Speech Tools with the following contextual information:
current phone, previous phone, next phone, syllable position, phonological features, phone type, etc.
The complete list of features is given at the URL below:
Once training is complete, the output tree, along with the testing data, is given to wagon_test in the Speech Tools, which then predicts the duration of each phone from its contextual information using the tree structure.
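The project uses wagon from the Edinburgh Speech Tools for this; purely as an illustration of the same idea, here is a minimal regression-tree sketch in Python with scikit-learn and made-up contextual features (phone identities encoded as integers):

from sklearn.tree import DecisionTreeRegressor

# Toy features per phone: (previous phone id, current phone id, next phone id, syllable position)
X_train = [[3, 10, 7, 0], [10, 7, 2, 1], [7, 2, 15, 2], [2, 15, 4, 0]]
y_train = [0.08, 0.12, 0.05, 0.20]  # observed phone durations in seconds

tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X_train, y_train)

# Predict the duration of a phone from its context, as wagon_test does with its tree.
print(tree.predict([[3, 10, 7, 0]]))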
Regarding replacing the traditional MFCC features with either PNCC or phonological features, I need to build acoustic models for the WSJ database using those features instead of MFCC. This is in progress, and once the acoustic models are built, the rest of the testing process stays the same.
Work to do:
By next week, I should be able to complete one of these two tasks, and the other one thereafter. In the final week, I will upload all the code to SVN and integrate these new techniques with the current working pronunciation evaluation demo at http://talknicer.net/~ronanki/
Ronanki: GSoC 2012 Pronunciation Evaluation week 9
This week, I finished my random-phrase pronunciation evaluation, which is now in the testing phase at http://talknicer.net/~ronanki/test/index.html
The system can provide an evaluation score for any random sentence. It also gives feedback on mispronunciations and duration rate at the word level. Please test the system and mail me any bugs you find. Please avoid proper nouns and punctuation marks while testing the system.
To do this, I evaluated the entire TIMIT dataset, and the statistics for each phone are computed at three positions:
Begin/Middle/End (0/1/2). The count in the last column represents the number of times each phone occurred at each position. The statistics are @ http://talknicer.net/~ronanki/phrase_data/statistics/TIMIT_statistics.txt
Next week, I am going to implement CART models so that each phone can be compared with the corresponding phone in a better-matched context. Regarding features, I studied Power-Normalized Cepstral Coefficients (PNCC), which are more robust for speech recognition even in noisy environments. PNCC features have 13 dimensions and are computationally more costly than MFCC, but perform better for speech recognition. I downloaded the available MATLAB code at http://www.cs.cmu.edu/~robust/archive/algorithms/PNCC_IEEETran/ and am trying some experiments on the NTIMIT database. I also implemented a phonological mapping from the current spectral features (MFCC) using an ANN. Currently, I am testing speech recognition with all of these features.
Ronanki: GSoC 2012 Pronunciation Evaluation week 8
This week, I mainly concentrated on integrating everything with the web demo at
http://talknicer.net/~ronanki/test/
The following have been integrated:
1. File upload option with different formats (wav/wma/mp3) is provided.
2. All test cases are evaluated at recording time, and only recordings that come close to the expected phrase are accepted.
3. The calculate-score button leads to the feedback page at http://talknicer.net/~ronanki/test/scores_page.html (some of the columns in the UI are still under construction).
4. Phrase entry of the user's choice, followed by score calculation, at http://talknicer.net/~ronanki/test/random.html is also under construction.
5. As of now, the statistics for user-chosen random phrases are derived from the TIMIT database, which covers 630 speakers with 10 recordings each.
Next Tasks:
1. Feature extraction (Power-Normalized Cepstral Coefficients and phonological features)
2. CART models (for efficient score calculation in random phrase method based on contextual information)
Regarding the under-construction pages, the back-end code has been developed and uploaded to SourceForge; only the web pages still need to be built dynamically. This will be done in parallel with the tasks above.
Ronanki: GSoC 2012 Pronunciation Evaluation week 7
Last week, I spent the first few days continuing to work on spectral features and phonological features and their mapping based on neural network training. Based on forced alignment (or manual labels, where they exist), the phonological features at http://talknicer.net/~ronanki/phonological_features/feature_stream for each phone in a phrase are repeated against its spectral features. I am looking at the CSLU Toolkit, which uses a neural net for feature-to-diphone decoding, and stopped at that point, to pick it up again after the mid-term evaluation.
Later, I worked on integrating the acoustic/duration scores and the edit-distance grammar decoding with the current website for exemplar outlier analysis.
I tried many test cases, such as:
1. Silence
2. Noisy speech
3. Junk speech
4. Random sentence
5. Actual sentence shortened at the end
6. Actual sentence with the beginning skipped
In test cases 1-6, forced alignment did not reach the final state, and failed to create the phone segmentation file and the label file, which contain the acoustic scores and phone labels respectively.
7. Actual sentence
8. Actual sentence with more silence at both the beginning and the end
9. Actual sentence with one small word skipped in the middle of the phrase
10. Similar-sounding sentences, such as
Ex. Utterance:
Approach the teaching of pronunciation with more confidence
Tested Similar sounding:
a. Approach the teaching opponents the nation with over confidence
b. Approach the preaching opponents the nation with confidence
In test cases 7-10, forced alignment worked and generated acoustic scores and phone labels. I then moved on to testing the accuracy of edit-distance grammar decoding on cases 7-10, so that I can set a threshold parameter to distinguish cases (7, 8, 9) from case (10).
Earlier, I tested cases 7 and 8 with the edit-distance phrase decoder and reported accuracy around 73%, while the accuracy is below 40% for case 10, so I can easily set a threshold parameter such as accuracy = x > 0.4 ? T : F.
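A minimal sketch of that decision in Python, assuming the expected and decoded phone sequences are already aligned and of equal length (the phone strings and the 0.4 threshold follow the figures above):

def phone_accuracy(expected, decoded):
    # Fraction of positions where the decoded phone matches the expected phone.
    matches = sum(1 for e, d in zip(expected, decoded) if e == d)
    return matches / float(len(expected))

expected = "W IH TH M AO R K AA N F AH D AH N S".split()
decoded = "W IY TH M AO R K AA N F ER D AH N S".split()

acc = phone_accuracy(expected, decoded)
is_expected_phrase = acc > 0.4  # T if the user read the expected phrase, F otherwise
print(acc, is_expected_phrase)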
I also discussed with my mentor James Salsman how to weight words by part of speech for the phrase output score. Here is what he proposed; the units are in dB, representing relative loudness in English.
my (%wt, %pos); # scoring weights and names of parts of speech
$wt{'q'} = 1.0; $pos{'q'} = 'quantifier';
$wt{'n'} = 0.9; $pos{'n'} = 'noun';
$wt{'v'} = 0.9; $pos{'v'} = 'verb';
$wt{'-'} = 0.8; $pos{'-'} = 'negative';
$wt{'w'} = 0.8; $pos{'w'} = 'adverb';
$wt{'m'} = 0.8; $pos{'m'} = 'adjective';
$wt{'o'} = 0.7; $pos{'o'} = 'pronoun';
$wt{'s'} = 0.6; $pos{'s'} = 'possessive';
$wt{'p'} = 0.6; $pos{'p'} = 'preposition';
$wt{'c'} = 0.5; $pos{'c'} = 'conjunction';
$wt{'a'} = 0.4; $pos{'a'} = 'article';
I hope to do this and launch the site, after integrating everything, before the mid-term evaluation submission.
Ronanki: GSoC 2012 Pronunciation Evaluation Week 6
I uploaded all my code (except a few ongoing pieces) at
http://cmusphinx.svn.sourceforge.net/viewvc/cmusphinx/branches/speecheval/ronanki/scripts/ . Please follow the README files in each folder for detailed instructions on how to use them.
This week, I concentrated on new features for speech recognition. I read a paper on Power-Normalized Cepstral Coefficients [1], which are more robust for speech recognition, and a few papers on phonological features [2], [3]. I hope to investigate mapping the acoustic speech features of each phoneme, derived from machine phonetic transcription, to phonological features. Using this mapping, mispronunciations at the phone level can be identified using phonological features along with acoustic pronunciation scores and edit distances. I derived a mapping, available at http://talknicer.net/~ronanki/phonological_features/, based on those papers.
Ongoing tasks:
1. In the random-phrase scoring method, another column has been added to store the position of each phone within its word (begin/middle/end), so that each phone has three sets of statistics:
http://talknicer.net/~ronanki/phrase_data/all_phrases_stats_position
2. Standard word scores are derived along with phoneme standard (acoustic + duration) scores in the current forced-alignment.
3. Linking the edit-distance algorithm with the pronunciation evaluation website.
4. Completing a full-fledged website at http://talknicer.net/~ronanki/test/ with all test cases (junk speech, silence, misread text, etc.) before the mid-term evaluation, and publicizing the system so that it can be tested by a large number of users.
References:
[1] Chanwoo Kim and Richard M. Stern, "Power-Normalized Cepstral Coefficients (PNCC) for Robust Speech Recognition", ICASSP 2012.
[2] Katrin Kirchhoff, Gernot A. Fink, and Gerhard Sagerer, "Combining acoustic and articulatory feature information for robust speech recognition", Speech Communication 37 (2002) 303–319.
[3] S. King and P. Taylor, “Detection of phonological features in continuous speech using neural networks,” Computer Speech and Language, vol. 14, no. 4, pp. 333–353, 2000.
Saturday, July 14, 2012
Daily progress reports in comments here
Quick mentor note: We are converting to daily progress reports which I will combine into draft blog posts that the students will proofread, copy-edit, and approve for publication. This will help keep all three of us on schedule. Sorry I am behind. The good news is that both students made it from "on schedule" to "ahead of schedule" in a sprint for the evaluations.
Congratulations, Troy and Ronanki!
Please post your daily-ish (4 or more per week) progress reports here. Thanks!
Tuesday, July 10, 2012
Troy: GSoC 2012 Pronunciation Evaluation Week 5
Sorry for the late update. The following are the things I did in Week 5; mainly problem solving.
1) Solving the Flash-based recorder problem introduced by a Flash Player update, which prevented users from using their microphones.
Before the Flash Player 11.2 and 11.3 updates, the audio recorder I created using Flex worked fine: users could simply right-click the recorder and select "Settings" to allow microphone access. With the new updates, however, that option is disabled without any error message.
To solve this problem, people suggested adding websites to the online global privacy list. However, after many attempts that still did not work for the audio recorder.
Furthermore, http://englishcentral.com/, which also uses Flash-based recording, pops up the Flash microphone privacy settings dialogue from its recording button (a microphone image). Checking the accessibility of the microphone in code and prompting for the settings dialogue when necessary provides the solution:
First, check whether a microphone is available; if not, show the Flash object's microphone list dialogue to ask the user to plug in a microphone:
// Requires: import flash.media.Microphone; import flash.system.Security; import mx.controls.Alert;
var mic:Microphone = Microphone.getMicrophone();
if (!mic) {
    Alert.show("No microphone available");
    debug("No microphone available");
    // Open the Flash settings panel on the microphone tab so the user can attach a device
    Security.showSettings("microphone");
}
Otherwise, check whether the microphone is accessible; if it is muted, show the privacy dialogue to ask the user to allow microphone access:
if (mic.muted) {
    debug("Microphone muted!");
    // Open the privacy panel so the user can allow microphone access for this site
    Security.showSettings("privacy");
}
With these checks during the initialization stage of the Flash recorder, users can enable microphone access right at the beginning. Interestingly, after doing this the "Settings" option of the Flash object becomes clickable again.
Looking back at the code, the solution seems obvious; but before you know the answer, it is really hard to guess.
2) Cross-browser Flash recorder compatibility
With the Flash recorder problem solved as above, I happily updated the source code in the trunk and on our server, hoping to see the site working nicely. But the browser showed that the Flash recorder could not load; the only information I got was "Error 2046"....
To solve this problem, I Googled a bunch of pages and tried several suggestions. The first was to clear the browser cache, set the Flash Player to not save a local cache, and then re-enable it (in effect clearing the Flash Player's local cache), which gave some progress by changing "Error 2046" to "Error 2032".
For "Error 2032", there are mainly two groups of explanations. One says there is something wrong with the URLs in ActionScript's HTTPRequests, which seems unlikely because those URLs are definitely correct and are under the same folder as the player. The other is an RSL problem of the mxmlc Flash compiler. To solve the RSL linkage problem, go to the "Flex Build Path" properties page, "Library path" tab, and change the framework linkage to "merged into code".
[Mentor note: Requesting compatibility with earlier versions of Flash ActionScript using compiler switches may or may not help here.]
3) Adding a password change page
4) Refining the user extra information update page to reflect the existing user information if available, instead of always showing the default values.
The website for exemplary recordings is now at a usable stage.
In this week, I will try to accomplish these things:
1) Phrase data entry for administrators (with text, exemplar pronunciations, homograph disambiguation, phonemes, parts of speech per word, etc.);
2) Design recording prompts to start our exemplary recording data collection;
3) Bug fixing and system testing;
4) Study Amazon Mechanical Turk and start thinking about how to incorporate our speech data collection onto that platform.
Ronanki: GSoC 2012 Pronunciation Evaluation Week 5
The basic scoring routine for the pronunciation evaluation system is now available at http://talknicer.net/~ronanki/test/. The output is generated for each phoneme in the phrase and displays the total score.
These are the things I've accomplished in the fifth week of GSoC 2012:
1. Edit-distance neighbor grammar generation:
Earlier, I did this with:
(a) a single-phone decoder
http://talknicer.net/~ronanki/phrase_data/results_edit_distance/output_1phone.txt
(b) a three-phone decoder (contextual)
http://talknicer.net/~ronanki/phrase_data/results_edit_distance/output_3phones.txt
(c) an entire phrase decoder with neighboring phones
http://talknicer.net/~ronanki/phrase_data/results_edit_distance/output_compgram.txt
This week, I added two more decoders: a word decoder, and a complete phrase decoder applied to one phoneme at a time.
Word decoder: I used sox to split each wav file into words based on the forced-alignment output, and then presented each word as follows.
Ex: word - "with" is presented as
public <phonelist> = ( (W | L | Y) (IH) (TH) );
public <phonelist> = ( (W) (IH | IY | AX | EH) (TH) );
public <phonelist> = ( (W) (IH) (TH | S | DH | F | HH) );
The accuracy turned out to be better than the single-phone/three-phone decoders and the same as the entire phrase decoder; the output for a sample test phrase is at http://talknicer.net/~ronanki/phrase_data/results_edit_distance/output_words.txt
Complete phrase decoder using each phoneme: this is again similar to the entire phrase decoder, but this time I supplied neighboring phones for one phoneme at a time and fixed the rest of the phonemes in the phrase. It is not a good approach, since it takes more time to decode, but the accuracy is better than all the previous methods. The output is at http://talknicer.net/~ronanki/phrase_data/results_edit_distance/output_phrases.txt
The code for the above methods is uploaded to the CMU Sphinx SourceForge repository at http://cmusphinx.svn.sourceforge.net/viewvc/cmusphinx/branches/speecheval/ronanki/scripts/neighborphones_decode/
Please follow the README file in each folder for detailed instructions on how to use them.
2. Scoring paradigm:
Phrase_wise:
The current basic scoring routine, deployed at http://talknicer.net/~ronanki/test/, aligns the test recording with the utterance using forced alignment in Sphinx and generates a phone segmentation file. Each phoneme in the file is then compared with the mean and standard deviation of the respective phone in the phrase statistics (http://talknicer.net/~ronanki/phrase_data/phrase1_stats.txt), and standard scores are calculated from the z-scores of the acoustic score and duration.
Random_phrase:
I also derived statistics (mean score, standard deviation of score, mean duration) for each phone in the CMU phone set, irrespective of context, using the exemplar recordings for all three phrases (http://talknicer.net/~ronanki/phrase_data/phrases.txt) that I have as of now. So, given a test utterance, I can compare each phone in the random phrase with the respective phone statistics.
The statistics are at http://talknicer.net/~ronanki/phrase_data/all_phrases_stats (the count column represents the number of times each phone occurred).
Things to do in the upcoming week:
1. Use of an edit-distance grammar to derive standard scores such that the minimal effective training data set is required. [Mentor note: was "no training data," which is excluded.]
2. Use of the same grammar to detect words that have two different correct pronunciations (ex: READ/RED).
3. In the random-phrase scoring method, another column can be added to store the position of each phone with respect to the word (or SILence), so that each phone has three sets of statistics and can be compared better with the exemplar phonemes based on position.
4. Link all those modules to try to match experts' scores.
5. Provide feedback to the user with underlined mispronunciations, or numerical labels.
Future tasks:
1. Use of CART models in training to do better match of statistics for each phoneme in the test utterance with the training data based on contextual information
2. Use of phonological (power normalized cepstral?) features instead of mel-cepstral features, which are expected to better represent the state of pronunciation.
3. Develop a complete web-based system so that end users can test their pronunciation efficiently.
Wednesday, July 4, 2012
Ronanki: GSoC 2012 Pronunciation Evaluation Week 4
The source code for the functions below has been uploaded to http://cmusphinx.svn.sourceforge.net/viewvc/cmusphinx/branches/speecheval/ronanki/scripts/
Here are some brief notes on how to use those programs:
Method 1: (phoneme decode)
Path:
neighborphones_decode/one_phoneme/
Steps To Run:
1. Use split_wav2phoneme.py to split a sample wav file into individual phoneme wav files
Usage: python split_wav2phoneme.py <input_phoneseg_file> <complete_phone_list> <input_wav_file> <out_split_dir>
2. Create split.ctl file using extracted split_wav directory
3. Run feature_extract.sh program to extract features for individual phoneme wav files
4. Java Speech Grammar Format (JSGF) files are already created in FSG_phoneme
5. Run jsgf2fsg.sh in FSG_phoneme to convert from jsgf to fsg.
6. Run decode_1phoneme.py to get the required output in output_decoded_phones.txt
Usage: python decode_1phoneme.py <input_split_ctl_file> <output_phone_file>
Method 2: (Three phones decode)
Path:
neighborphones_decode/three_phones/
Steps To Run:
1. Use split_wav2threephones.py to split a sample wav file into individual three-phone wav files, where the outer two phones serve as contextual information for the middle one.
Usage: python split_wav2threephones.py <input_phoneseg_file> <ngb_key_mapper> <input_wav_file> <out_split_dir>
2. Create split.ctl file using extracted split_wav directory
3. Run feature_extract.sh program to extract features for individual phoneme wav files
4. Java Speech Grammar Format (JSGF) files are already created in FSG_phoneme
5. Run jsgf2fsg.sh in FSG_phoneme to convert from jsgf to fsg.
6. Run decode_3phones.py to get the required output in output_decoded_phones.txt
Usage: python decode_3phones.py <input_split_ctl_file> <output_phone_file>
Method 3: (Single/Batch phrase decode)
Path:
neighborphones_decode/phrases/
Steps To Run:
1. Run the decode.sh program to get the required output in sample.out
2. Provide the input arguments, such as the grammar file, features, acoustic models, etc., for the input test phrase
3. Construct the grammar file (JSGF) using my earlier phonemes2ngbphones scripts, and then use jsgf2fsg in sphinxbase to convert from JSGF to FSG, which serves as the input language model for sphinx3_decode
Troy: GSoC 2012 Pronunciation Evaluation Week 4
[Project mentor note: I have been holding these more recent blog posts pending some issues with Adobe Flash security updates which periodically break cross-platform audio upload web browser solutions. We have decided to plan for a fail-over scheme using low-latency HTTP POST multipart/form-data binary Speex uploads to provide backup in case Flash/rtmplite fails again in the future. This might also support most of the mobile devices. Please excuse the delay and rest assured that progress continues and will continue to be announced at such time as we are confident that we won't need to contradict ourselves as browser technology for audio upload continues to develop. --James Salsman]
The data collection website now can provide basic capabilities. Anyone interested, please check out http://talknicer.net/~li-bo/datacollection/login.php and give it a try. If you encounter any problems, please let us know.
Here are my accomplishments from last week:
1) Discussed the project schema design with the project mentor and created the database with MySQL. The current schema is shown at http://talknicer.net/w/Database_schema. During the development of the user interface, slight modifications were made to refine the database schema; for example, for the age field in the users table, storing the user's birth date is much better. Other similar changes were made. I learned that good database design comes from practice, not purely imagination.
2) Implemented the two types of user registration pages: one for students and one for exemplar uploaders. To avoid redundant work and allow fewer constraints on the types of users, the registration process involves two steps: a basic registration and an extra information update. For students, only the basic step is mandatory, but exemplar uploaders have to fill out both forms.
3) Added extra supporting functionality for user management, including password reset and mode selection for users with more than one type.
4) Incorporated the audio recorder with the website for recording and uploading to servers.
This week I plan to:
1) Complete the user interface for adding phrase prompts;
2) Test the resulting system;
3) Design the pronunciation learning game for student users.
Tuesday, June 19, 2012
Ronanki: GSoC 2012 Pronunciation Evaluation Week 3
I finally finished trying different methods for edit-distance grammar decoding. Here is what I have tried so far:
1. I used sox to split each input wave file into individual phonemes based on the forced alignment output. Then, I tried decoding each phoneme against its neighboring phonemes. The decoding output matched the expected phonemes only 12 out of 41 times for the exemplar recordings in the phrase "Approach the teaching of pronunciation with more confidence"
The accuracy for that method of edit distance scoring was 12/41 (29%) -- This naive approach didn't work well.
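A rough sketch of this kind of sox-based splitting (not the project's actual split script), assuming the forced-alignment segmentation gives start and end frames at 100 frames per second:

import subprocess

def split_phones(wav_path, segments, out_dir):
    # segments: list of (phone, start_frame, end_frame) from forced alignment.
    # Frames are assumed to be 10 ms each (100 frames per second).
    for i, (phone, sfrm, efrm) in enumerate(segments):
        start = sfrm / 100.0
        duration = (efrm - sfrm + 1) / 100.0
        out_wav = "%s/%03d_%s.wav" % (out_dir, i, phone)
        subprocess.check_call(["sox", wav_path, out_wav,
                               "trim", str(start), str(duration)])

segments = [("AH", 10, 21), ("P", 22, 33), ("R", 34, 39)]
split_phones("phrase1.wav", segments, "split_wav")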
2. I used sox to split each input wave file into three-phone segments based on the forced alignment output and the position of the phoneme. If a phoneme is at the beginning of its word, I used a grammar like <current phone> <next> <next2next>; if it is a middle phoneme, <previous> <current> <next>; and if it is at the end, <previous2previous> <previous> <current>. I supplied neighboring phones for the current phone and fixed the other two. For example, the phoneme IH in the word "with" is encoded as: ((W) (IH|IY|AX|EH) (TH))
The accuracy was 19/41 (46.2%) -- better because of more contextual information.
3. I used the entire phrase with each phoneme encoded in a sphinx3_decode grammar file for matching a sequence of alternative neighboring phonemes which looks something like this:
#JSGF V1.0;
grammar phonelist;
public <phonelist> = (SIL (AH|AE|ER|AA) (P|T|B|HH) (R|Y|L) (OW|AO|UH|AW) (CH|SH|JH|T) (DH|TH|Z|V)(AH|AE|ER|AA) (T|CH|K|D|P|HH) (IY|IH|IX) (CH|SH|JH|T) (IH|IY|AX|EH) (NG|N) (AH|AE|ER|AA) (V|F|DH) (P|T|B|HH)(R|Y|L) (AH|AE|ER|AA) (N|M|NG) (AH|AE|ER|AA) (N|M|NG) (S|SH|Z|TH) (IY|IH|IX) (EY|EH|IY|AY) (SH|S|ZH|CH) (AH|AE|ER|AA) (N|M|NG) (W|L|Y) (IH|IY|AX|EH) (TH|S|DH|F|HH) (M|N) (AO|AA|ER|AX|UH) (R|Y|L) (K|G|T|HH) (AA|AH|ER|AO) (N|M|NG) (F|HH|TH|V) (AH|AE|ER|AA) (D|T|JH|G|B) (AH|AE|ER|AA) (N|M|NG) (S|SH|Z|TH) SIL);
The accuracy for this method of edit distance scoring was 30/41 (73.2%) -- the more contextual information provided, the better the accuracy.
Here is some sample output, written one below the other to allow comparison of the phonemes.
Forced-alignment output: AH P R OW CH DH AH T IY CH IH NG AH V P R AH N AH N S IY EY SH AH N W IH TH M
Decoder output: ER P R UH JH DH AH CH IY CH IY N AH V P R ER N AH NG Z IY EY SH AH N W IH TH M
In this case, both are forced outputs. So, if someone skips or inserts something while recording the phrase, it may not work well. We need to think of a method to solve this. Would a separate decoding pass with a grammar that tests for whole-word or syllable insertions and deletions work?
Things to do for next week:
1. We are trying to combine acoustic standard scores (and duration) from forced alignment with an edit distance scoring grammar, which was reported to have better correspondence with human expert phonologists.
2. Complete a basic demo of the pronunciation evaluation without edit distance scoring from exemplar recordings using conversion of phoneme acoustic scores and durations to normally distributed scores, and then using those to derive their means and standard deviations, so we can produce per-phoneme acoustic and duration standard scores for new uploaded recordings.
3. Finalize the method for mispronunciation detection at phoneme and word level.
Troy: GSoC 2012 Pronunciation Evaluation Week 3
Week 3 accomplishments:
1. Tailored the previous ActionScript/MXML audio recorder to provide only audio recording and playback functionality and began interfaces for interaction with the web site pages using JavaScript.
2. Discussed database design and schema with the project mentor and continued refining and testing the schema and initial database records.
Plans for Week 4:
1. Fix the database schema for prompts to handle word lists with (possibly multiple) pronunciations and parts of speech, along with a separate text string for phrase display which can include arbitrary punctuation and might not have as clear word boundaries because of that punctuation--such as this phrase in dashes--etc.
2. Create separate registration interface for users who will be uploading exemplar pronunciation recordings.
3. Create an interface to add phrase prompts and mark their words' disambiguated pronunciation and parts of speech.
4. Create the interface to upload exemplar recordings for prompts.
5. Think about game play and refine its schema once the basic features are decided.
Sunday, June 10, 2012
Ronanki: GSoC 2012 Pronunciation Evaluation Week 2
[It is my fault this update is late, not Ronanki's. --James Salsman]
Following last week's discussion describing how to obtain phoneme acoustic scores from sphinx3_align, here is some additional detail pertaining to two of the necessary output arguments:
1. Following up on the discussion at https://sourceforge.net/projects/cmusphinx/forums/forum/5471/topic/4583225, I was able to produce acoustic scores for each frame, and thereby also for each phoneme, in a single recognition pass. Add the following code to the write_stseg function in main_align.c and use the state segmentation parameter -stsegdir as an argument to the program:
char str2[1024];
align_stseg_t *tmp1;
for (i = 0, tmp1 = stseg; tmp1; i++, tmp1 = tmp1->next) {
    mdef_phone_str(kbc->mdef, tmp1->pid, str2);
    fprintf(fp, "FrameIndex %d Phone %s PhoneID %d SenoneID %d state %d Ascr %11d \n",
            i, str2, tmp1->pid, tmp1->sen, tmp1->state, tmp1->score);
}
2. By using the phone segmentation parameter -phsegdir as an argument to the program, the acoustic scores for each phoneme can be calculated. The output sequence for the word "approach" is as follows:
SFrm EFrm SegAScr Phone
0 9 -64725 SIL
10 21 -63864 AH SIL P b
22 33 -126819 P AH R i
34 39 -21470 R P OW i
40 51 -69577 OW R CH i
52 64 -55937 CH OW DH e
Each phoneme in the "Phone" column is represented as <Aligned_phone> <Previous_phone> <Next_phone> <position_in_the_word (b-begin, i-middle, e-end)>. The full command line usage for this output is:
$ sphinx3_align -hmm wsj_all_cd30.mllt_cd_cont_4000 -dict cmu.dic -fdict phone.filler -ctl phone.ctl -insent phone.insent -cepdir feats -phsegdir phonesegdir -phlabdir phonelabdir -stsegdir statesegdir -wdsegdir aligndir -outsent phone.outsent
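As a rough illustration (not one of the project's actual scripts), here is a minimal Python sketch that parses phone segmentation lines like the ones above into per-phone acoustic scores and durations, assuming 100 frames per second:

def parse_phseg(lines):
    # Each data line: start frame, end frame, segment acoustic score, phone label(s).
    phones = []
    for line in lines:
        if line.startswith("SFrm") or not line.strip():
            continue  # skip the header and blank lines
        parts = line.split()
        sfrm, efrm, ascr = int(parts[0]), int(parts[1]), int(parts[2])
        phone = parts[3]
        duration = (efrm - sfrm + 1) / 100.0  # assumes 10 ms frames
        phones.append((phone, ascr, duration))
    return phones

sample = ["SFrm EFrm SegAScr Phone",
          "0 9 -64725 SIL",
          "10 21 -63864 AH SIL P b"]
print(parse_phseg(sample))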
Work in progress:
1. It's very important to weight word scores by the words' part of speech (articles don't matter very much if they are omitted, but nouns, adjectives, verbs, and adverbs are the most important.) Troy has designed a basic database schema at http://talknicer.net/w/Database_schema in which the part of speech is one of the fields in the "prompts" table along with acoustic and duration standard scores in the "scores" table.
2. I put some exemplar recordings for three phrases the project mentor had collected at http://talknicer.net/~ronanki/Datasets/ in each subdirectory there for each of the three phrases. The description of the phrases is at http://talknicer.net/~ronanki/Datasets/files/phrases.txt.
3. I ran sphinx3_align for that sample data set. I wrote a program to calculate mean and standard deviations of phoneme acoustic scores, and the mean duration of each phoneme. I also generated neighbor phonemes for each of the phrases, and the output is written in this file: http://talknicer.net/~ronanki/Datasets/out_ngb_phonemes.insent
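A minimal sketch of that aggregation step, reusing tuples like the ones produced by the parsing sketch above (phone, acoustic score, duration); the numbers here are made up:

from collections import defaultdict
import math

def phone_stats(segments):
    # segments: list of (phone, acoustic_score, duration) tuples
    grouped = defaultdict(list)
    for phone, ascr, dur in segments:
        grouped[phone].append((ascr, dur))
    stats = {}
    for phone, vals in grouped.items():
        scores = [v[0] for v in vals]
        durs = [v[1] for v in vals]
        mean = sum(scores) / float(len(scores))
        var = sum((s - mean) ** 2 for s in scores) / float(len(scores))
        stats[phone] = {"mean_ascr": mean, "std_ascr": math.sqrt(var),
                        "mean_dur": sum(durs) / float(len(durs))}
    return stats

print(phone_stats([("AH", -63864, 0.12), ("AH", -61000, 0.10), ("P", -126819, 0.12)]))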
4. I also tried some of the other Sphinx-3 executables, such as sphinx3_decode, sphinx3_livepretend, and sphinx3_continuous, for mispronunciation detection. For the sentence "Approach the teaching of pronunciation with more confidence." (phrase 1), I used this command:
$ SPHINX3DECODE -hmm ${WSJ} -fsg phone.fsg -dict basicphone.dic -fdict phone.filler -ctl new_phone.ctl -hyp phone.out -cepdir feats -mode allphone -hypseg phone_hypseg.out -op_mode 2
The decoder, sphinx3_decode, produced this output:
P UH JH DH CH IY CH Y N Z Y EY SH AH W Z AO K AA F AH N Z
The forced alignment system, sphinx3_align, produced this output:
AH P R OW CH DH AH T IY CH IH NG AH V P R AH N AH N S IY EY SH AH N W IH TH M AO R K AA N F AH D AH N S
The sphinx3_livepretend and sphinx3_continuous commands produce output in words, using language models and acoustic models along with a complete dictionary of expected words:
approach to teaching opponents the nation with more confidence
Plans for the coming week:
1. Write and test audio upload and pronunciation evaluation for per-phoneme standard scores.
2. Since there are many deletions in the edit distance scoring grammars tried so far, we need to modify the grammar file and/or the method we are using for detecting whether neighboring phonemes match more closely. Here is my idea for finding neighboring phonemes by dynamic programming:
a. Run the decoder to get the best possible output
b. Align the decoder output to the forced-alignment output using a dynamic programming string matching algorithm (see the sketch after this list)
c. The aligned output will have the same number of phones as the forced alignment. So, we need to test two things for each phoneme:
- If the phone is the same as the expected phoneme, nothing needs to be done
- If the phone is not the expected phoneme, check whether it appears in the list of neighboring phonemes of the expected phoneme
d. Then, we can run sphinx3_align with this outcome against the same wav file to check whether the acoustic scores actually indicate a better match.
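A minimal sketch of step (b) in Python, using difflib for the string alignment; the phone sequences and the neighbor lists here are made up for illustration:

import difflib

expected = "AH P R OW CH DH AH".split()  # forced-alignment phones
decoded = "ER P R UH JH DH AH".split()   # decoder output phones

# Hypothetical neighbor lists; the project derives these from its own tables.
neighbors = {"AH": ["AE", "ER", "AA"], "OW": ["AO", "UH", "AW"], "CH": ["SH", "JH", "T"]}

sm = difflib.SequenceMatcher(None, expected, decoded)
for op, i1, i2, j1, j2 in sm.get_opcodes():
    if op == "equal":
        continue  # these phones match the expected ones; nothing to check
    for e, d in zip(expected[i1:i2], decoded[j1:j2]):
        ok = d in neighbors.get(e, [])
        print(e, "->", d, "neighbor" if ok else "unexpected")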
3. As an alternative to the above, I used sox to split each input wave file into individual phoneme wav files using the forced-alignment phone labels, and then ran a separate recognition pass on each tiny speech segment. Now I am writing separate grammar files for the neighboring phonemes of each phoneme. Once I complete them, I will check the output of the decoder for each phoneme segment. This should provide a more accurate assessment of mispronunciations.
4. I will update the wiki at http://cmusphinx.sourceforge.net/wiki/pronunciation_evaluation with my current tasks and milestones.
Tuesday, June 5, 2012
Troy: GSoC 2012 Pronunciation Evaluation Week 2
These are the things I've accomplished in the second week of GSoC 2012:
1. Set up a cron job for the rtmplite server to automatically check whether the process is still running or not. If it is stopped, restart it. This will allow the server to stay up if the machine gets rebooted, and will allow the server to spawn subprocesses without being stopped by job control as happens when the process is put into the background from a terminal shell. To accomplish this, I first created a .process file in my home directory with the rtmplite server's process id number as its sole contents. You can use 'top' or 'ps' to find out the process id of the server. Then I created this shell script file to check the status of the rtmplite server process:
pidfile=~/.process
# exefile and dataroot should point to the rtmplite server script and its data directory
if [ -e "$pidfile" ]
then
    # check whether the process is running
    rtmppid=`/usr/bin/head -n 1 ${pidfile} | /usr/bin/awk '{print $1}'`
    # restart the process if not running
    if [ ! -d /proc/${rtmppid} ]
    then
        /usr/bin/python ${exefile} -r ${dataroot} &
        rtmppid=$!
        echo "${rtmppid}" > ${pidfile}
        echo `/bin/date` "### rtmplite process restarted with pid: ${rtmppid}"
    fi
fi
This script first checks whether the .process file exists. If we temporarily don't want the cron job to check this process (for example while we apply patches to the program), we can simply delete the file and the script won't check on or try to restart the server; after our maintenance, we recreate the file with the new process id and the checking automatically resumes. The last and most important step is to schedule this task in cron by creating the following entry with the command crontab -e:
* * * * * [path_to_the_script]/check_status.sh
This causes the cron system to run the script every minute, thereby checking the rtmplite server process every minute.
2. Implemented web server user login and registration pages using MySQL and HTML. We use a MySQL database for storing user information, so I designed and created this table for user information in the server's MySQL database:
Field | Type | Comments
userid | INTEGER | Compulsory, automatically incremented, primary key
email | VARCHAR(200) | Compulsory, users are identified by email
password | VARCHAR(50) | Compulsory, encrypted using SHA1, at least 8 alphanumeric characters
name | VARCHAR(100) | Not compulsory, default NULL
age | INTEGER | Not compulsory, default NULL, accepted values [0,150]
sex | CHAR(1) | Not compulsory, default NULL, accepted values {'M', 'F'}
native | CHAR(1) | Not compulsory, default NULL, accepted values {'Y', 'N'}. Indicates whether the user is a native English speaker.
place | VARCHAR(1000) | Not compulsory, default NULL. Indicates where the user lived between the ages of 6 and 8.
accent | CHAR(1) | Not compulsory, default NULL, accepted values {'Y', 'N'}. Indicates whether the user reports having an accent.
This table was created by the following SQL command:
CREATE TABLE users (
userid INTEGER NOT NULL AUTO_INCREMENT,
email VARCHAR(200) NOT NULL,
password VARCHAR(50) NOT NULL,
name VARCHAR(100),
age INTEGER,
sex SET('M', 'F'),
native SET('Y', 'N') DEFAULT 'N',
place VARCHAR(1000),
accent SET('Y', 'N'),
CONSTRAINT PRIMARY KEY (userid),
CONSTRAINT chk_age CHECK (age>=0 AND age<=150)
);
I also prototyped the login and simple registration pages in HTML. If you like, you can go to this page to help us test the system: http://talknicer.net/~li-bo/datacollection/login.php. On the server, we use PHP to retrieve the form information from the login and registration pages, perform an update or query on the MySQL database, and then send data back as HTML.
The recording interface has also been modified to use HTML instead of pure Flex as before. The page currently displays well, but there is no event interaction between HTML and Flash yet.
3. Database schema design for the entire project: Several SQL tables have been designed to store the various information used by all aspects of this project. Detailed table information can be found on our wiki page: http://talknicer.net/w/Database_schema. Here is a brief discussion.
First, the user table shown above will be augmented to keep two additional kinds of user information: one for normal student users and one for those who are providing exemplar recordings. Student users, when they can provide correct pronunciation, should also be allowed to contribute to the exemplar recordings. Also if exemplar recorders register through the website, they have to show they are proficient enough to contribute a qualified exemplar recording, so we should be able to use the student evaluation system to qualify them for uploading exemplar contributions.
There are several other tables for additional information: a languages table with the list of languages defined by ISO, in case we extend the project to other languages; a region table to capture some idea of the user's accent; and a prompts table for the list of text resources that will be used for pronunciation evaluation. There are also tables to log the recordings the users make, and tables for the sets of tests stored in the system.
Here are my plans for the coming week:
1. Discuss details of the game specification to finish the last part of schema design.
2. Figure out how to integrate the Flash audio recorder with the HTML interface using bidirectional communication between ActionScript and JavaScript.
3. Implement the student recording interface.
4. Further tasks can be found at: http://talknicer.net/w/To_do_list