Author
Xiaojun Chen
Zhihan Lv
Abstract
Combining the communicative language competence model with the perspective of multimodal research, this study proposes a research framework for oral communicative competence from a multimodal perspective. The framework not only reflects genuine language communicative competence but also covers the various aspects of spoken language that assessment must address. To address the feature sparseness of the user evaluation matrix, this paper proposes a feature weight assignment algorithm based on an English spoken-category keyword dictionary and user search records. The algorithm relies on a self-built classification dictionary of spoken-English categories to convert a user's query vector into a spoken-English-type vector for that user. Using the calculation rules proposed in this paper, the target user's preference score for a specific type of spoken English is obtained, and this score is assigned to the unrated items of the original user feature matrix as an initial score. At the same time, to address the limited accuracy of user similarity calculation, a user similarity algorithm based on the "Synonyms Cilin Extended Edition" thesaurus and search records is proposed. The algorithm uses "Synonyms Cilin" to compute the semantic correlation between the items, vocabulary, and query vectors in users' query records, derives the similarity between users from it, and finally yields a user similarity measure that integrates user ratings and query vectors. For the task of Chinese grammatical error correction, this article uses two methods of modeling the relationships between words in the corpus, Word2Vec and GloVe, to train word vectors of different dimensions and uses these word vectors to represent the text features of the experimental samples, avoiding the errors introduced by word segmentation of sentences. On the basis of the word vectors, the strengths and weaknesses of CNN, LSTM, and SVM models on this shared task are analyzed through experimental data. The comparative experiments show that the proposed method achieves relatively good results.
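The abstract outlines two recommendation-side steps: filling unrated entries of the sparse user feature matrix with a preference score derived from search records and a spoken-English category dictionary, and fusing rating-based similarity with query-vector similarity. Below is a minimal Python sketch of that pipeline under stated assumptions; the category dictionary, the 5-point rating scale, the weighting parameter alpha, and the cosine measure are illustrative choices, and the paper's Cilin-based semantic correlation is replaced here by simple keyword matching, so this is not the authors' implementation.

```python
# Hypothetical sketch (not the authors' released code): derive a category
# preference vector from a user's search queries, use it to seed unrated
# items in the sparse rating matrix, and fuse rating similarity with
# query-vector similarity. All names below are illustrative assumptions.

import numpy as np

# Assumed spoken-English category keyword dictionary: category -> keywords.
CATEGORY_DICT = {
    "business": {"meeting", "negotiation", "interview"},
    "travel":   {"airport", "hotel", "directions"},
    "daily":    {"greeting", "shopping", "weather"},
}
CATEGORIES = list(CATEGORY_DICT)

def query_to_type_vector(queries):
    """Map a user's search queries onto spoken-English categories."""
    vec = np.zeros(len(CATEGORIES))
    for q in queries:
        for i, cat in enumerate(CATEGORIES):
            if any(kw in q for kw in CATEGORY_DICT[cat]):
                vec[i] += 1.0
    total = vec.sum()
    return vec / total if total else vec  # normalized preference distribution

def fill_sparse_ratings(ratings, queries, item_category):
    """Assign an initial score to unrated items (0 = unrated) from the
    user's category preference, on an assumed 5-point scale."""
    pref = query_to_type_vector(queries)
    filled = ratings.astype(float).copy()
    for j, r in enumerate(ratings):
        if r == 0:  # unrated item: seed with the preference for its category
            filled[j] = 5.0 * pref[CATEGORIES.index(item_category[j])]
    return filled

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def combined_similarity(ratings_u, ratings_v, type_u, type_v, alpha=0.6):
    """Fuse rating-based and query-vector-based similarity
    (alpha is an assumed weighting parameter)."""
    return alpha * cosine(ratings_u, ratings_v) + (1 - alpha) * cosine(type_u, type_v)

# Toy usage:
# queries = ["hotel booking dialogue", "job interview phrases"]
# ratings = np.array([5, 0, 0])
# item_category = ["travel", "business", "daily"]
# print(fill_sparse_ratings(ratings, queries, item_category))
```

The sketch keeps the two stages separate so that the seeded matrix can feed any neighborhood-based recommender, while the fused similarity in combined_similarity stands in for the paper's measure that integrates user ratings and query vectors.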
Suggested Citation
Xiaojun Chen & Zhihan Lv, 2021.
"Synthetic Network and Search Filter Algorithm in English Oral Duplicate Correction Map,"
Complexity, Hindawi, vol. 2021, pages 1-12, April.
Handle:
RePEc:hin:complx:9960101
DOI: 10.1155/2021/9960101