№ 8. С. 97–98.
[32] Ле Мань Ха. Оптимизация алгоритма KNN для классификации текстов // Труды МФТИ. 2015. Т. 7, № 3. С. 92–94.
[33] Ле Мань Ха. Прогнозирование настроения человека по анализу текста // 55-я научная конференция МФТИ. 11/2012.
[34] Ле Мань Ха. Прогнозирование настроения человека по анализу текста // XI Всероссийская научная конференция «Нейрокомпьютеры и их применение». 3/2013.
[35] Le Manh Ha. Sentiment Estimation // Международная конференция «Инжиниринг и Телекоммуникации - EnT». 11/2014.
[36] Ле Мань Ха. Спам-фильтр с использованием метода опорных векторов // 57-я научная конференция МФТИ. 11/2014.
[37] Ле Мань Ха. Классификация текстов с использованием метода опорных векторов // XIII Всероссийская научная конференция «Нейрокомпьютеры и их применение». 3/2015.
[38] Ле Мань Ха. Алгоритм KNN для классификации текстов и его оптимизация // XIV Всероссийская научная конференция «Нейрокомпьютеры и их применение». 3/2016.
[39] Ле Мань Ха. Нейросетевые подходы к классификации текстов на основе морфологического разбора // XV Всероссийская научная конференция «Нейрокомпьютеры и их применение». 3/2017.
[40] Харламов А. А., Ле Мань Ха. Нейросетевые подходы к классификации текстов на основе морфологического анализа // Труды МФТИ. 2017. Т. 9, № 2. С. 143–150.
[41] Нгуен Нгок Зиеп, Ле Мань Ха. Нейросетевой метод снятия омонимии // Труды МФТИ. 2015. Т. 7, № 3. С. 174–182.
[42] Wu HC, Luk RW, Wong KF, Kwok KL. Interpreting tf-idf term weights as making relevance decisions. ACM Transactions on Information Systems (TOIS). 2008 Jun 1;26(3):13.
[43] Maas AL, Daly RE, Pham PT, Huang D, Ng AY, Potts C. Learning word vectors for sentiment analysis. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1. 2011 Jun 19 (pp. 142-150). Association for Computational Linguistics.
[44] Pennington J., Socher R., Manning C. D. GloVe: Global Vectors for Word Representation. In: EMNLP. 2014 Oct 25 (Vol. 14, pp. 1532-1543).
[45] Goethals B, Laur S, Lipmaa H, Mielikäinen T. On private scalar product computation for privacy-preserving data mining. In: ICISC. 2004 Dec 2 (Vol. 3506, pp. 104-120).
[46] McCallum A., Nigam K. A comparison of event models for naive Bayes text classification. In: AAAI-98 Workshop on Learning for Text Categorization. 1998 Jul 26 (Vol. 752, pp. 41-48).
[47] Rish I. An empirical study of the naive Bayes classifier. In: IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence. 2001 Aug 4 (Vol. 3, No. 22, pp. 41-46). IBM.
[48] Domingos P, Pazzani M. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning. 1997 Nov 1;29(2):103-30.
[49] Dumais S, Platt J, Heckerman D, Sahami M. Inductive learning algorithms and representations for text categorization. In: Proceedings of the Seventh International Conference on Information and Knowledge Management. 1998 Nov 1 (pp. 148-155). ACM.
[50] Zhang H, Li D. Naïve Bayes text classifier. In: Granular Computing, 2007 (GRC 2007), IEEE International Conference on. 2007 Nov 2 (pp. 708-708). IEEE.
[51] Frank E., Bouckaert R. R. Naive Bayes for text classification with unbalanced classes. In: European Conference on Principles of Data Mining and Knowledge Discovery. 2006 Sep 18 (pp. 503-510). Springer Berlin Heidelberg.
[52] Khan A, Baharudin B, Lee LH, Khan K. A review of machine learning algorithms for text-documents classification. Journal of Advances in Information Technology. 2010 Feb;1(1):4-20.
[53] Miao YQ, Kamel M. Pairwise optimized Rocchio algorithm for text categorization. Pattern Recognition Letters. 2011 Jan 15;32(2):375-82.
[54] Li X, Liu B. Learning to classify texts using positive and unlabeled data. In: IJCAI. 2003 Aug 9 (Vol. 3, No. 2003, pp. 587-592).
[55] Joachims T. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. Carnegie Mellon University, Pittsburgh, PA, Department of Computer Science; 1996 Mar.
[56] Dudani SA. The distance-weighted k-nearest-neighbor rule. IEEE Transactions on Systems, Man, and Cybernetics. 1976 Apr;(4):325-7.
[57] Manning C. D., Raghavan P., Schütze H. Introduction to Information Retrieval. Cambridge: Cambridge University Press; 2008 Jul 12.
[58] Song J, Su F, Tai CL, Cai S. An object-oriented progressive-simplification-based vectorization system for engineering drawings: model, algorithm, and performance. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002 Aug;24(8):1048-60.
[59] Daniel J., James H. M. Speech and Language Processing: Computational Linguistics, and Speech Recognition. UK: Prentice-Hall Inc; 2000.
[60] Tan S. An effective refinement strategy for KNN text classifier. Expert Systems with Applications. 2006 Feb 28;30(2):290-8.
[61] Yang Y, Liu X. A re-examination of text categorization methods. In: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 1999 Aug 1 (pp. 42-49). ACM.
[62] Gayathri K, Marimuthu A. Text document pre-processing with the KNN for classification using the SVM. In: Intelligent Systems and Control (ISCO), 2013 7th International Conference on. 2013 Jan 4 (pp. 453-457). IEEE.
[63] Cortes C, Vapnik V. Support vector machine. Machine Learning. 1995 Sep;20(3):273-97.
[64] Ng A. Stanford CS229 Lecture Notes. Support Vector Machine.
[65] Tong S, Koller D. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research. 2001;2(Nov):45-66.
[66] Лифшиц Ю. Метод опорных векторов. URL: http://logic.pdmi.ras.ru/~yura/internet/07ia.pdf. 2006.
[67] Гольштейн Е. Г., Третьяков Н. В. Модифицированные функции Лагранжа: теория и методы оптимизации. 1989.
[68] Joachims T. Text categorization with support vector machines: Learning with many relevant features. Machine Learning: ECML-98. 1998:137-42.
[69] Kivinen J., Warmuth M. K. The perceptron algorithm vs. winnow: linear vs. logarithmic mistake bounds when few input variables are relevant. In: Proceedings of the Eighth Annual Conference on Computational Learning Theory. 1995 Jul 5 (pp. 289-296). ACM.
[70] Hosmer Jr DW, Lemeshow S, Sturdivant RX. Applied Logistic Regression. John Wiley & Sons; 2013 Apr 1.
[71] Kleinbaum DG, Klein M. Analysis of matched data using logistic regression. In: Logistic Regression. Springer New York; 2010. pp. 389-428.
[72] Gold S, Rangarajan A. Softmax to softassign: Neural network algorithms for combinatorial optimization. Journal of Artificial Neural Networks. 1996 Aug 1;2(4):381-99.
[73] Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological). 1977 Jan 1:1-38.
[74] Chen Z, Kulperger R, Jiang L. Jensen's inequality for g-expectation: part 1. Comptes Rendus Mathematique. 2003 Dec 1;337(11):725-30.
[75] Rabiner L, Juang B. An introduction to hidden Markov models. IEEE ASSP Magazine. 1986 Jan;3(1):4-16.
[76] Ramage D. Hidden Markov models fundamentals. Lecture Notes. URL: http://cs229.stanford.edu/section/cs229-hmm.pdf. 2007 Dec 1.
[77] Ito K, Kunisch K. Augmented Lagrangian formulation of nonsmooth, convex optimization in Hilbert spaces. Lecture Notes in Pure and Applied Mathematics. Control of Partial Differential Equations and Applications. 1995 Sep 20;174:107-17.
[78] Eddy SR. Hidden Markov models. Current Opinion in Structural Biology. 1996 Jun 1;6(3):361-5.
[79] Yu SZ, Kobayashi H. An efficient forward-backward algorithm for an explicit-duration hidden Markov model. IEEE Signal Processing Letters. 2003 Jan;10(1):11-4.
[80] Forney GD. The Viterbi algorithm. Proceedings of the IEEE. 1973 Mar;61(3):268-78.
[81] Bellman R. Dynamic Programming. Courier Corporation; 2013.
[82] McLachlan G, Krishnan T. The EM Algorithm and Extensions. John Wiley and Sons; 2007 Nov 9.
[83] Devijver PA. Baum's forward-backward algorithm revisited. Pattern Recognition Letters. 1985 Dec 1;3(6):369-73.
[84] Dumais ST. Latent semantic analysis. Annual Review of Information Science and Technology. 2004 Jan 1;38(1):188-230.
[85] Bingham E, Mannila H.















