“Machine Learning” – a course from MSU. Cost: 30,000 rubles; duration: 3 weeks (1 month). Date: November 30, 2023.
Purpose of the program – to introduce students to the basics of machine learning.
Duration of training – 72 hours (30 hours of classroom lessons with a teacher, 42 hours of independent study of materials).
Form of study – mixed full-time/part-time (evening classes).
Class format – in person; participants from other cities who cannot attend in person can join each lesson via video conference.
Cost of education - 30,000 rubles.
Start of classes - autumn 2023.
Training agreements are concluded with individuals and legal entities.
Registration for the course is by email ([email protected]) or via the registration form on the website.
You can contact the course administrator, Anton Martyanov, to register or with questions via WhatsApp or Telegram: +79264827721.
Instructor: Doctor of Technical Sciences. Position: Professor at the Higher School of Management and Innovation, M.V. Lomonosov Moscow State University.
Section 1. Introduction. Examples of tasks. Logical methods: decision trees and decision forests.
Logical methods: classification of objects based on simple rules. Interpretability and implementation. Combining rules into a composition. Decision trees. Random forests.
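The logical methods named above can be sketched as follows. This is a minimal illustration, not part of the course materials; scikit-learn and the iris dataset are assumptions chosen purely for demonstration:

```python
# A single decision tree (a hierarchy of simple threshold rules) versus
# a random forest (many decorrelated trees voting together).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# One interpretable tree: each internal node tests a single feature threshold.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)

# A composition of trees: averaging the votes reduces variance.
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

print("tree accuracy:   %.2f" % tree.score(X_test, y_test))
print("forest accuracy: %.2f" % forest.score(X_test, y_test))
```

A single shallow tree can be printed and read as a rule list, which is the interpretability the section refers to; the forest trades that readability for accuracy.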
Section 2. Metric classification methods. Linear methods, stochastic gradient.
Metric methods: classification based on similarity. Distance between objects; metrics. The k-nearest-neighbors method. Generalization to regression problems via kernel smoothing. Linear models: scalability and applicability to big data. The stochastic gradient method and its use for tuning linear classifiers. The concept of regularization. Practical features of working with linear methods. Classification quality metrics.
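Both families from this section can be shown side by side. A minimal sketch, assuming scikit-learn (the iris dataset and the chosen hyperparameters are illustrative, not from the program):

```python
# Metric method (kNN) versus a linear classifier tuned by stochastic gradient.
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Classify each object by a majority vote among its k closest neighbors
# under the Euclidean metric.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# A linear classifier fitted by stochastic gradient descent; `alpha` is
# the strength of the L2 regularization penalty mentioned above.
sgd = make_pipeline(
    StandardScaler(),
    SGDClassifier(penalty="l2", alpha=1e-3, random_state=0),
).fit(X_train, y_train)

print("kNN accuracy: %.2f" % knn.score(X_test, y_test))
print("SGD accuracy: %.2f" % sgd.score(X_test, y_test))
```

Note the `StandardScaler` in the SGD pipeline: it reflects one of the "practical features of working with linear methods" from the section, since gradient steps behave badly when features live on very different scales.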
Section 3. Support Vector Machine (SVM). Logistic regression. Classification quality metrics.
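The two classifiers named in this section's heading, evaluated with standard quality metrics, can be sketched as follows (scikit-learn and the breast-cancer dataset are assumptions for illustration):

```python
# SVM and logistic regression on a binary task, compared on accuracy,
# precision, and recall.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Support vector machine: maximizes the margin between the classes.
svm = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)

# Logistic regression: a linear model producing class probabilities.
logreg = make_pipeline(
    StandardScaler(), LogisticRegression(max_iter=1000)
).fit(X_train, y_train)

for name, model in [("SVM", svm), ("logreg", logreg)]:
    pred = model.predict(X_test)
    print(name,
          "accuracy %.2f" % accuracy_score(y_test, pred),
          "precision %.2f" % precision_score(y_test, pred),
          "recall %.2f" % recall_score(y_test, pred))
```

Reporting precision and recall alongside accuracy matters on imbalanced data, where accuracy alone can look deceptively high.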
Section 4. Linear regression. Dimensionality reduction, principal component method.
Linear models for regression and their connection with the singular value decomposition of the objects-features matrix. Reducing the number of features. Approaches to feature selection. Principal component analysis. Dimensionality reduction methods.
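The link between principal components and the singular value decomposition mentioned above can be demonstrated directly. A sketch assuming NumPy and scikit-learn, with the iris data standing in for an arbitrary objects-features matrix:

```python
# PCA computed two ways: explicitly via the SVD of the centered
# objects-features matrix, and via scikit-learn's PCA.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
Xc = X - X.mean(axis=0)          # center the objects-features matrix

# SVD: Xc = U S V^T. The rows of V^T are the principal axes.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2_svd = Xc @ Vt[:2].T           # project onto the first two components

# The same projection via the library implementation.
X2 = PCA(n_components=2).fit_transform(Xc)

# The two agree up to the sign of each axis (SVD signs are arbitrary).
print(np.allclose(np.abs(X2), np.abs(X2_svd)))
```

Each principal component direction is a right singular vector, and the variance it captures is proportional to the square of the corresponding singular value.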
Section 5. Compositions of algorithms, gradient boosting. Neural networks.
Combining models into a composition. Mutual correction of model errors. Basic concepts and problem statements related to compositions. Gradient boosting.
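The mutual error correction described above is exactly what gradient boosting does: each new model is fitted to the residual errors of the current composition. A minimal sketch, with scikit-learn and the dataset chosen only for illustration:

```python
# Gradient boosting: shallow trees added one by one, each correcting
# the errors of the ensemble built so far.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gb = GradientBoostingClassifier(
    n_estimators=100,    # number of trees in the composition
    learning_rate=0.1,   # how strongly each tree's correction is applied
    max_depth=3,         # each base model is deliberately weak
    random_state=0,
).fit(X_train, y_train)

print("boosting accuracy: %.2f" % gb.score(X_test, y_test))
```

The small `learning_rate` is the key design choice: shrinking each correction forces the composition to improve gradually, which usually generalizes better than a few large steps.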
Neural networks: searching for nonlinear separating surfaces. Multilayer neural networks and their tuning via the backpropagation method. Deep neural networks: their architectures and features.
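A multilayer network tuned by backpropagation can be sketched with scikit-learn's built-in perceptron (an assumed toolkit; the digits dataset and layer size are illustrative):

```python
# A multilayer perceptron: nonlinear separating surfaces learned by
# error backpropagation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units; weights are tuned by gradient descent,
# with gradients computed by backpropagation.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0),
).fit(X_train, y_train)

print("MLP accuracy: %.2f" % mlp.score(X_test, y_test))
```

A linear model cannot separate these classes this well in the raw pixel space; the hidden layer is what supplies the nonlinear separating surface the section describes.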
Section 6. Clustering and visualization.
Problems of unsupervised learning: finding structure in data. Clustering: the task of finding groups of similar objects. Visualization: the task of mapping objects into two- or three-dimensional space.
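Both unsupervised tasks from this section fit in a few lines. A sketch assuming scikit-learn; note that neither step uses the labels:

```python
# Clustering (k-means) and visualization (projection to the plane via PCA),
# both performed without any target labels.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# Clustering: partition the objects into 3 groups of similar objects.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Visualization: map the 4-dimensional objects into 2-dimensional space.
X2 = PCA(n_components=2).fit_transform(X)

print("cluster sizes:", sorted((labels == k).sum() for k in range(3)))
print("projected shape:", X2.shape)
```

In practice the two are combined: the clusters found in the full feature space are drawn as colors on the 2-D projection to inspect whether the grouping looks sensible.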
Section 7. Applied data analysis problems: formulations and solution methods.
Semi-supervised learning as a problem sitting between supervised learning and clustering: a sample in which the value of the target variable is known only for some objects. How this differs from the previously discussed formulations. Approaches to solution.
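The setting above, where only part of the sample is labeled, can be sketched with scikit-learn's label spreading (the dataset, the 50% masking, and the kNN graph kernel are all assumptions for illustration):

```python
# Semi-supervised learning: labels propagate from the few labeled objects
# to their unlabeled neighbors through a similarity graph.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

# Hide the target for about half the objects; -1 marks "label unknown".
y_partial = y.copy()
unlabeled = rng.random(len(y)) < 0.5
y_partial[unlabeled] = -1

# Fit on the full sample, labeled and unlabeled objects together.
model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)

# transduction_ holds the labels inferred for every object.
acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print("accuracy on unlabeled objects: %.2f" % acc)
```

This is exactly the difference from the earlier formulations: a purely supervised method would discard the unlabeled objects, while clustering would ignore the known labels; here both are used at once.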
Analysis of problems from applied areas: scoring in banks, insurance, underwriting problems, pattern recognition problems.
Address
119991, Moscow, st. Leninskie Gory, 1, bldg. 51, 5th floor, room 544 (Dean's office)