86. Springenberg J. T., Dosovitskiy A., Brox T., Riedmiller M. Striving for simplicity: The all convolutional net // International Conference on Learning Representations Workshop. — 2015.
87. Radford A., Metz L., Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks // International Conference on Learning Representations. — 2016.
88. Xie S., Girshick R., Dollár P., Tu Z., He K. Aggregated residual transformations for deep neural networks // Conference on Computer Vision and Pattern Recognition. — 2017.
89. Ioannou Y., Robertson D., Cipolla R., Criminisi A. Deep roots: Improving CNN efficiency with hierarchical filter groups // Conference on Computer Vision and Pattern Recognition. — 2017.
90. LeCun Y., Bottou L., Bengio Y., Haffner P. Gradient-based learning applied to document recognition // Proceedings of the IEEE. — 1998. — Vol. 86, no. 11. — P. 2278–2324.
91. Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., Erhan D., Vanhoucke V., Rabinovich A. Going deeper with convolutions // Proceedings of the IEEE conference on computer vision and pattern recognition. — 2015. — P. 1–9.
92. He K., Zhang X., Ren S., Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition // IEEE transactions on pattern analysis and machine intelligence. — 2015. — Vol. 37, no. 9. — P. 1904–1916.
93. Ioffe S., Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift // International Conference on Machine Learning. — 2015. — P. 448–456.
94. Ioffe S. Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models // arXiv. — 2017.
95. Ba J. L., Kiros J. R., Hinton G. E. Layer normalization // Advances in Neural Information Processing Systems Deep Learning Symposium. — 2016.
96. Deng J., Dong W., Socher R., Li L.-J., Li K., Fei-Fei L. ImageNet: A large-scale hierarchical image database // Conference on Computer Vision and Pattern Recognition. — 2009.
97. Russakovsky O. [et al.]. ImageNet Large Scale Visual Recognition Challenge // International Journal of Computer Vision. — 2015.
98. Tikhonov A. N., Arsenin V. Ya. Methods for solving ill-posed problems. — 1979.
99. Dai J., Li Y., He K., Sun J. R-FCN: Object Detection via Region-based Fully Convolutional Networks // Advances in Neural Information Processing Systems. — 2016.
100. Mnih A., Gregor K. Neural Variational Inference and Learning in Belief Networks // International Conference on Machine Learning. — 2014. — P. 1791–1799.
101. Gu S., Levine S., Sutskever I., Mnih A. MuProp: Unbiased backpropagation for stochastic neural networks // International Conference on Learning Representations. — 2016.
102. Mnih A., Rezende D. Variational inference for Monte Carlo objectives // International Conference on Machine Learning. — 2016. — P. 2188–2196.
103. Kingma D. P., Welling M. Auto-encoding variational Bayes // International Conference on Learning Representations. — 2014.
104. Rezende D. J., Mohamed S., Wierstra D. Stochastic backpropagation and approximate inference in deep generative models // International Conference on Machine Learning. — 2014.
105. Ruiz F. J., Titsias M. K., Blei D. M. The generalized reparameterization gradient // Advances in Neural Information Processing Systems. — 2016. — P. 460–468.
106. Naesseth C., Ruiz F., Linderman S., Blei D. Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms // Artificial Intelligence and Statistics. — 2017. — P. 489–498.
107. Hubara I., Courbariaux M., Soudry D., El-Yaniv R., Bengio Y. Binarized Neural Networks // Advances in Neural Information Processing Systems / ed. by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, R. Garnett. — Curran Associates, Inc., 2016. — P. 4107–4115. — URL: http://papers.nips.cc/paper/6573-binarized-neural-networks.pdf.
108. Bengio Y., Léonard N., Courville A. Estimating or propagating gradients through stochastic neurons for conditional computation // arXiv. — 2013.
109. Raiko T., Berglund M., Alain G., Dinh L. Techniques for learning binary stochastic feedforward neural networks // International Conference on Learning Representations. — 2015.
110. Chung J., Ahn S., Bengio Y. Hierarchical multiscale recurrent neural networks // International Conference on Learning Representations. — 2017.
111. Kočiský T., Melis G., Grefenstette E., Dyer C., Ling W., Blunsom P., Hermann K. M. Semantic Parsing with Semi-Supervised Sequential Autoencoders // Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). — 2016. — URL: https://arxiv.org/abs/1609.09315.
112. Jang E., Gu S., Poole B. Categorical Reparameterization with Gumbel-Softmax // International Conference on Learning Representations. — 2017.
113. Maddison C. J., Mnih A., Teh Y. W. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables // International Conference on Learning Representations. — 2017.
114. Maddison C. J., Tarlow D., Minka T. A* sampling // Advances in Neural Information Processing Systems. — 2014. — P. 3086–3094.
115. Misailovic S., Sidiroglou S., Hoffmann H., Rinard M. Quality of service profiling // ICSE. — 2010.
116. Misailovic S., Roy D. M., Rinard M. C. Probabilistically accurate program transformations // Static Analysis. — 2011.
117. Sidiroglou-Douskos S., Misailovic S., Hoffmann H., Rinard M. Managing Performance vs. Accuracy Trade-offs With Loop Perforation // ACM SIGSOFT. — 2011.
118. Samadi M., Jamshidi D. A., Lee J., Mahlke S. Paraprox: Pattern-based approximation for data parallel applications // ASPLOS. — 2014.
119. Krizhevsky A. cuda-convnet2 / https://github.com/akrizhevsky/cuda-convnet2/. — 2014.
120. Chetlur S., Woolley C., Vandermersch P., Cohen J., Tran J., Catanzaro B., Shelhamer E. cuDNN: Efficient Primitives for Deep Learning // arXiv. — 2014.
121. Ovtcharov K., Ruwase O., Kim J.-Y., Fowers J., Strauss K., Chung E. S. Accelerating Deep Convolutional Neural Networks Using Specialized Hardware // Microsoft Research Whitepaper. — 2015.
122. Courbariaux M., Bengio Y., David J. Low precision arithmetic for deep learning // International Conference on Learning Representations. — 2015.
123. Gupta S., Agrawal A., Gopalakrishnan K., Narayanan P. Deep Learning with Limited Numerical Precision // International Conference on Machine Learning. — 2015.
124. Denton E. L., Zaremba W., Bruna J., LeCun Y., Fergus R. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation // Advances in Neural Information Processing Systems. — 2014.
125. Jaderberg M., Vedaldi A., Zisserman A. Speeding up convolutional neural networks with low rank expansions // BMVC. — 2014.
126. Lebedev V., Ganin Y., Rakhuba M., Oseledets I., Lempitsky V. Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition // International Conference on Learning Representations. — 2015.
127. Zhang X., Zou J., He K., Sun J. Accelerating very deep convolutional networks for classification and detection // IEEE transactions on pattern analysis and machine intelligence. — 2016. — Vol. 38, no. 10. — P. 1943–1955.
128. Graham B. Spatially-sparse convolutional neural networks // arXiv. — 2014.
129. Lebedev V., Lempitsky V. Fast convnets using group-wise brain damage // Conference on Computer Vision and Pattern Recognition. — 2016.
130. Collins M. D., Kohli P. Memory Bounded Deep Convolutional Networks // arXiv. — 2014.
131. Novikov A., Podoprikhin D., Osokin A., Vetrov D. Tensorizing Neural Networks // Advances in Neural Information Processing Systems. — 2015.
132. Yang Z., Moczulski M., Denil M., Freitas N. de, Smola A. J., Song L., Wang Z. Deep Fried Convnets // International Conference on Computer Vision. — 2015.
133. Graham B. Fractional Max-Pooling // arXiv. — 2014.
134. Chen T. Matrix Shadow library / https://github.com/dmlc/mshadow. — 2015.
135. Krizhevsky A., Hinton G. Learning multiple layers of features from tiny images // Computer Science Department, University of Toronto, Tech. Rep. — 2009.
136. Caffe Model Zoo / http://caffe.berkeleyvision.org/model_zoo.html.
137. Ba J., Salakhutdinov R., Grosse R., Frey B. Learning wake-sleep recurrent attention models // Advances in Neural Information Processing Systems. — 2015.
138. Almahairi A., Ballas N., Cooijmans T., Zheng Y., Larochelle H., Courville A. Dynamic Capacity Networks // International Conference on Machine Learning. — 2016.
139. Liao Q., Poggio T. Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex // arXiv. — 2016.
140. Greff K., Srivastava R., Schmidhuber J. Highway and Residual Networks learn Unrolled Iterative Estimation // International Conference on Learning Representations. — 2017.
141. Xingjian S., Chen Z., Wang H., Yeung D.-Y., Wong W.-k., Woo W.-c. Convolutional LSTM network: A machine learning approach for precipitation nowcasting // Advances in Neural Information Processing Systems. — 2015.
142. Li Z., Gavves E., Jain M., Snoek C. G. VideoLSTM convolves, attends and flows for action recognition // arXiv. — 2016.
143. Lin T.-Y., Maire M., Belongie S., Hays J., Perona P., Ramanan D., Dollár P., Zitnick C. L. Microsoft COCO: Common objects in context // European Conference on Computer Vision. — 2014.
144. Huang G., Sun Y., Liu Z., Sedra D., Weinberger K. Deep Networks with Stochastic Depth // European Conference on Computer Vision. — 2016.
145. Larochelle H. My notes on Adaptive Computation Time for Recurrent Neural Networks. — 2016. — URL: https://goo.gl/QxBucH.
146. Huang G., Liu Z., Weinberger K. Q. Densely connected convolutional networks // Conference on Computer Vision and Pattern Recognition. — 2017.
147. Han S., Mao H., Dally W. J. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding // International Conference on Learning Representations. — 2016.
148. Szegedy C., Ioffe S., Vanhoucke V. Inception-v4, Inception-ResNet and the impact of residual connections on learning // arXiv. — 2016.
149. Li H., Lin Z., Shen X., Brandt J., Hua G. A convolutional neural network cascade for face detection // Conference on Computer Vision and Pattern Recognition. — 2015.
150. Yang F., Choi W., Lin Y. Exploit all the layers: Fast and accurate CNN object detector with scale dependent pooling and cascaded rejection classifiers // Conference on Computer Vision and Pattern Recognition. — 2016.
151. Teerapittayanon S., McDanel B., Kung H. BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks // ICPR. — 2016.
152. Bylinskii Z., Judd T., Borji A., Itti L., Durand F., Oliva A., Torralba A. MIT Saliency Benchmark / http://saliency.mit.edu/.
153. Bylinskii Z., Judd T., Oliva A., Torralba A., Durand F. What do different evaluation metrics tell us about saliency models? // arXiv. — 2016.
154. Kruthiventi S. S., Ayush K., Babu R. V. DeepFix: A fully convolutional neural network for predicting human eye fixations // arXiv. — 2015.
155. Neumann M., Stenetorp P., Riedel S. Learning to Reason with Adaptive Computation // NIPS Workshop on Interpretable Machine Learning in Complex Systems. — 2016.
156. Staines J., Barber D. Variational Optimization // arXiv. — 2012.
157. Staines J., Barber D. Optimization by Variational Bounding // ESANN. — 2013.
158. Sohn K., Lee H., Yan X. Learning structured output representation using deep conditional generative models // Advances in Neural Information Processing Systems. — 2015. — P. 3483–3491.
159. Li Z., Yang Y., Liu X., Wen S., Xu W. Dynamic Computational Time for Visual Attention // arXiv.