Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining 2001 Aug 26 (pp. 245-250). ACM.
[86] Landauer TK. Latent semantic analysis. John Wiley and Sons, Ltd; 2006.
[87] Wiener E, Pedersen JO, Weigend AS. A neural network approach to topic spotting. In Proceedings of SDAIR-95, 4th annual symposium on document analysis and information retrieval 1995 Apr 24 (Vol. 317, p. 332).
[88] Lawley D. Factor analysis as a statistical method. Ripol Klassik; 1967. (in Russian)
[89] Ishii N, Murai T, Yamada T, Bao Y. Text classification by combining grouping, LSA and kNN. In Computer and Information Science, 2006 and 2006 1st IEEE/ACIS International Workshop on Component-Based Software Engineering, Software Architecture and Reuse. ICIS-COMSAR 2006. 5th IEEE/ACIS International Conference on 2006 Jul 10 (pp. 148-154). IEEE.
[90] Hinton G. A practical guide to training restricted Boltzmann machines. Momentum. 2010 Aug 2;9(1):926.
[91] Haykin S. Neural networks: a complete course. 2nd ed. Moscow: Williams; 2006. (in Russian)
[92] Hinton GE, Salakhutdinov RR. Replicated softmax: an undirected topic model. In Advances in neural information processing systems 2009 (pp. 1607-1614).
[93] Yu B, Xu ZB, Li CH. Latent semantic analysis for text categorization using neural network. Knowledge-Based Systems. 2008 Dec 31;21(8):900-4.
[94] Beklemishev DV. Additional chapters of linear algebra. Moscow: Nauka, Main Editorial Board for Physical and Mathematical Literature; 1983. (in Russian)
[95] Brants T, Chen F, Tsochantaridis I. Topic-based document segmentation with probabilistic latent semantic analysis. In Proceedings of the eleventh international conference on Information and knowledge management 2002 Nov 4 (pp. 211-218). ACM.
[96] Hofmann T. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval 1999 Aug 1 (pp. 50-57). ACM.
[97] Vorontsov KV. Probabilistic topic modeling. Moscow; 2013 Oct. (in Russian)
[98] Nikulin VN. On matrix decomposition by the stochastic gradient descent method as applied to the problem of supervised classification of microarrays. Computer Research and Modeling. 2013;5(2):131-40. (in Russian)
[99] Akaike H. Information theory and an extension of the maximum likelihood principle. In Selected Papers of Hirotugu Akaike 1998 (pp. 199-213). Springer New York.
[100] Gaussier E, Goutte C. Relation between PLSA and NMF and implications. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval 2005 Aug 15 (pp. 601-602). ACM.
[101] Socher R, Pennington J, Huang EH, Ng A, Manning CD. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the conference on empirical methods in natural language processing 2011 Jul 27 (pp. 151-161). Association for Computational Linguistics.
[102] Socher R, Huang EH, Pennin J, Manning CD, Ng AY. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems 2011 (pp. 801-809).
[103] Memisevic R, Zach C, Pollefeys M, Hinton GE. Gated softmax classification. In Advances in neural information processing systems 2010 (pp. 1603-1611).
[104] Dunne RA, Campbell NA. On the pairing of the softmax activation and cross-entropy penalty functions and the derivation of the softmax activation function. In Proc. 8th Aust. Conf. on the Neural Networks, Melbourne 1997 (Vol. 181, p. 185).
[105] Mandic DP. A generalized normalized gradient descent algorithm. IEEE Signal Processing Letters. 2004 Feb;11(2):115-8.
[106] Kim Y. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. 2014 Aug 25.
[107] Kalchbrenner N, Grefenstette E, Blunsom P. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. 2014 Apr 8.
[108] Margarit H, Subramaniam R. A batch-normalized recurrent network for sentiment classification. Advances in Neural Information Processing Systems. 2016.
[109] Liu P, Qiu X, Huang X. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101. 2016 May 17.
[110] Fang I-T. Deep Learning for Query Semantic Domains Classification. 2016.
[111] Galushkin AI. Theory of neural networks. Moscow: IPRZhR; 2000. (in Russian)
[112] Galushkin AI, Fomin YuI. Neural networks as linear sequential machines. Moscow: MAI Publishing House; 1991. (in Russian)
[113] Hepner GF. Artificial neural network classification using a minimal training set: comparison to conventional supervised classification. Photogrammetric Engineering and Remote Sensing. 1990;56(4):469-73.
[114] Bogdanov YuM, Galushkin AI, Starovoitov AV. Directions of fundamental research in the field of neural network technologies. Informatizatsiya i Svyaz'. 2012;(8):5-9. (in Russian)
[115] Cochocki A, Unbehauen R. Neural networks for optimization and signal processing. John Wiley & Sons, Inc.; 1993 Jun 1.
[116] Lee KY, Cha YT, Park JH. Short-term load forecasting using an artificial neural network. IEEE Transactions on Power Systems. 1992 Feb;7(1):124-32.
[117] Galushkin AI, Tyukhov BP, Vasilkova TA, Slobodenyuk VA. Analysis of the dynamics of systems for recognizing nonstationary patterns. Trudy MIEM. 1971;(23). (in Russian)
[118] Demuth HB, Beale MH, De Jess O, Hagan MT. Neural network design. Martin Hagan; 2014 Sep 1.
[119] Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural networks. 1989 Dec 31;2(5):359-66.
[120] Demuth HB, Beale MH, De Jess O, Hagan MT. Neural network design. Martin Hagan; 2014 Sep 1.
[121] Krogh A, Vedelsby J. Neural network ensembles, cross validation, and active learning. Advances in neural information processing systems. 1995 May;7:231-8.
[122] Benediktsson JA, Swain PH, Ersoy OK. Neural network approaches versus statistical methods in classification of multisource remote sensing data. 1990.
[123] Tu JV. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. Journal of clinical epidemiology. 1996 Nov 1;49(11):1225-31.
[124] Patterson DW. Artificial neural networks: theory and applications. Prentice Hall PTR; 1998 Aug 1.
[125] Fausett LV. Fundamentals of neural networks. Prentice-Hall; 1994.
[126] White H. Artificial neural networks: approximation and learning theory. Blackwell Publishers, Inc.; 1992 Oct 1.
[127] Galushkin AI. Synthesis of multilayer pattern recognition systems. Moscow: Energiya; 1974. (in Russian)
[128] Werbos PJ. Beyond regression: new tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard University, Cambridge, MA; 1974.
[129] Leshno M, Lin VY, Pinkus A, Schocken S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural networks. 1993 Dec 31;6(6):861-7.
[130] Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Computational models of cognition and perception, Vol. 1, chap. 8. Cambridge, MA: MIT Press; 1986. pp. 319-362.
[131] Vorontsov KV. Mathematical methods of learning from precedents (machine learning theory). Moscow; 2011. (in Russian)
[132] Murata N, Yoshizawa S, Amari SI. Network information criterion-determining the number of hidden units for an artificial neural network model. IEEE Transactions on Neural Networks. 1994 Nov;5(6):865-72.
[133] Lagaris IE, Likas A, Fotiadis DI. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks. 1998 Sep;9(5):987-1000.
[134] Mandic DP. A generalized normalized gradient descent algorithm. IEEE Signal Processing Letters. 2004 Feb;11(2):115-8.
[135] Bottou L. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010 2010 (pp. 177-186). Physica-Verlag HD.
[136] Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006 Jul 28;313(5786):504-7.
[137] Lee H, Grosse R, Ranganath R, Ng A. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th annual international conference on machine learning 2009 Jun 14 (pp. 609-616). ACM.
[138] Bengio Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning. 2009 Nov 15;2(1):1-127.
[139] Korolev VYu. The EM algorithm, its modifications, and their application to the problem of separating mixtures of probability distributions: a theoretical review. Moscow; 2007. (in Russian)
[140] Andrews NO, Fox EA. Recent developments in document clustering. Technical report, Computer Science, Virginia Tech; 2007 Oct 16.
[141] Qi Y, Wang Y, Zheng X, Wu Z. Robust feature learning by stacked autoencoder with maximum correntropy criterion. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on 2014 May 4 (pp. 6716-6720). IEEE.
[142] Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol PA. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research. 2010;11(Dec):3371-408.
[143] Britz D. Implementing a CNN for Text Classification in TensorFlow. 2015.
[144] LeCun Y. LeNet-5, convolutional neural networks. URL: http://yann.lecun.com/exdb/lenet. 2015.
[145] Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2013 Aug;35(8):1798-828.
[146] Bengio Y, LeCun Y. Scaling learning algorithms towards AI. Large-scale kernel machines. 2007 Sep;34(5):1-41.
[147] Brants T, Franz A. Web 1T 5-gram Version 1 LDC2006T13. DVD. Philadelphia: Linguistic Data Consortium; 2006.
[148] Funahashi KI, Nakamura Y. Approximation of dynamical systems by continuous time recurrent neural networks. Neural networks. 1993 Dec 31;6(6):801-6.
[149] Mikolov T, Karafiát M, Burget L, Cernocký J, Khudanpur S. Recurrent neural network based language model. In Interspeech 2010 Sep 26 (Vol. 2, p. 3).
[150] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997 Nov 15;9(8):1735-80.
[151] Kharlamov AA, Ermolenko TV. Automatic formation of a heterogeneous semantic network based on identification of key predicate structures of text sentences. In Proceedings of the International Scientific and Technical Conference "Open Semantic Technologies for Intelligent Systems Design" (OSTIS'2012). Minsk; 2012. (in Russian)
[152] Lee JY, Dernoncourt F.















