2. Introduction to Machine Learning
2.1. What is machine learning?
Machine learning was defined in 1959 by Arthur Samuel as the “field of study that gives computers the ability to learn without being explicitly programmed.” In other words, it is about imbuing machines with knowledge without hard-coding it.
2.2. What you need to know before getting started
2.2.1. Elements of the Python language and the standard library
Libraries are installed manually (e.g. with pip).
2.2.3. The SciPy ecosystem
2.2.4. External machine learning libraries
scikit-learn is a set of Python modules for machine learning and data mining. It features various classification, regression and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
Simple and efficient tools for data mining and data analysis
Accessible to everybody, and reusable in various contexts
Built on NumPy, SciPy, and matplotlib
Open source, commercially usable - BSD license
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code. TensorFlow also includes TensorBoard, a data visualization toolkit.
TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google’s Machine Intelligence Research organization for the purposes of conducting machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.
PyMC3 is a Python package for Bayesian statistical modeling and Probabilistic Machine Learning which focuses on advanced Markov chain Monte Carlo and variational fitting algorithms. Its flexibility and extensibility make it applicable to a large suite of problems.
2.2.5. Data handling libraries
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal. Here are just a few of the things that pandas does well:
Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects
Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
Intuitive merging and joining data sets
Flexible reshaping and pivoting of data sets
Hierarchical labeling of axes (possible to have multiple labels per tick)
Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving/loading data from the ultrafast HDF5 format
Time series-specific functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
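A minimal sketch of a few of the features listed above (missing data, groupby and label-based selection); the data frame itself is invented purely for illustration:

import numpy as np
import pandas as pd

# A small frame with a missing value (NaN) in a numeric column
df = pd.DataFrame({
    'city': ['Kraków', 'Kraków', 'Warszawa', 'Warszawa'],
    'year': [2016, 2017, 2016, 2017],
    'sales': [10.0, np.nan, 7.0, 9.0],
})

# Missing data is easy to detect and fill
print(df['sales'].isnull().sum())                     # 1
df['sales'] = df['sales'].fillna(df['sales'].mean())

# Split-apply-combine with groupby
print(df.groupby('city')['sales'].mean())

# Label-based selection with .loc
print(df.loc[df['year'] == 2017, ['city', 'sales']])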
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
a powerful N-dimensional array object
sophisticated (broadcasting) functions
tools for integrating C/C++ and Fortran code
useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.
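A short sketch of the points above (the N-dimensional array, broadcasting, linear algebra and random numbers); the arrays are made up for illustration:

import numpy as np

# A powerful N-dimensional array object
a = np.arange(12).reshape(3, 4)           # 3x4 array containing 0..11

# Broadcasting: the column of row means is "stretched" across every column of `a`
row_means = a.mean(axis=1, keepdims=True)
centered = a - row_means

# Linear algebra, Fourier transform and random number capabilities
x = np.random.rand(3)
y = np.linalg.solve(np.eye(3), x)         # trivially y == x
spectrum = np.fft.fft(x)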
2.2.6. Math, Plots, Graphs
SciPy (pronounced “Sigh Pie”) is open-source software for mathematics, science, and engineering. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. It is also the name of a very popular conference on scientific programming with Python.
The SciPy library depends on NumPy, which provides convenient and fast N-dimensional array manipulation. The SciPy library is built to work with NumPy arrays, and provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization. Together, they run on all popular operating systems, are quick to install, and are free of charge. NumPy and SciPy are easy to use, but powerful enough to be depended upon by some of the world’s leading scientists and engineers. If you need to manipulate numbers on a computer and display or publish the results, give SciPy a try.
SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.
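A minimal sketch of two of those routines (optimization and numerical integration); the functions being minimized and integrated are chosen only as examples:

import numpy as np
from scipy import integrate, optimize

# Numerical optimization: find the minimum of a simple quadratic
result = optimize.minimize_scalar(lambda x: (x - 3) ** 2)
print(result.x)                             # close to 3

# Numerical integration: integral of sin(x) over [0, pi]
value, error = integrate.quad(np.sin, 0, np.pi)
print(value)                                # close to 2.0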
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shell (à la MATLAB or Mathematica), web application servers, and various graphical user interface toolkits.
It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged. SciPy makes use of matplotlib.
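A short sketch of the object-oriented API mentioned above; the data and the output file name are arbitrary:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)

# Object-oriented API: create the figure and axes explicitly
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label='sin(x)')
ax.plot(x, np.cos(x), label='cos(x)')
ax.set_xlabel('x')
ax.set_ylabel('value')
ax.legend()

fig.savefig('trig.png')   # hardcopy output
plt.show()                # interactive window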
PyDotPlus is an improved version of the old pydot project that provides a Python Interface to Graphviz’s Dot language.
Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics, software engineering, database and web design, machine learning, and in visual interfaces for other technical domains.
The Graphviz layout programs take descriptions of graphs in a simple text language, and make diagrams in useful formats, such as images and SVG for web pages; PDF or Postscript for inclusion in other documents; or display in an interactive graph browser. Graphviz has many useful features for concrete diagrams, such as options for colors, fonts, tabular node layouts, line styles, hyperlinks, and custom shapes.
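A minimal sketch of driving Graphviz from Python through PyDotPlus; the graph and the output file name are invented, and the Graphviz binaries themselves must be installed on the system for the rendering step to work:

import pydotplus

# A tiny graph described in Graphviz's Dot language
dot_source = """
digraph Example {
    features -> model;
    labels -> model;
    model -> predictions;
}
"""

graph = pydotplus.graph_from_dot_data(dot_source)
graph.write_png('example.png')   # rendering requires Graphviz to be installed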
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
Jupyter notebook is a language-agnostic HTML notebook application for Project Jupyter. In 2015, Jupyter notebook was released as a part of The Big Split™ of the IPython codebase. IPython 3 was the last major monolithic release containing both language-agnostic code, such as the IPython notebook, and language specific code, such as the IPython kernel for Python. As computing spans across many languages, Project Jupyter will continue to develop the language-agnostic Jupyter notebook in this repo and with the help of the community develop language specific kernels which are found in their own discrete repos.
2.3. Important questions before building an algorithm
How does this work in the real world?
How much training data do you need?
How is the tree created?
What makes a good feature?
2.4. Data cleaning
A very important topic
Hardly anyone mentions it!
'Jana III Sobieskiego 1/2'
'ul Jana III Sobieskiego 1/2'
'ul. Jana III Sobieskiego 1/2'
'ul.Jana III Sobieskiego 1/2'
'ulicaJana III Sobieskiego 1/2'
'Ul. Jana III Sobieskiego 1/2'
'UL. Jana III Sobieskiego 1/2'
'ulica Jana III Sobieskiego 1/2'
'Ulica. Jana III Sobieskiego 1/2'
'os. Jana III Sobieskiego 1/2'
'plac Jana III Sobieskiego 1/2'
'pl Jana III Sobieskiego 1/2'
'al Jana III Sobieskiego 1/2'
'al. Jana III Sobieskiego 1/2'
'aleja Jana III Sobieskiego 1/2'
'alei Jana III Sobieskiego 1/2'
'Jana 3 Sobieskiego 1/2'
'Jana 3ego Sobieskiego 1/2'
'Jana III Sobieskiego 1 m. 2'
'Jana III Sobieskiego 1 apt 2'
'Jana Iii Sobieskiego 1/2'
'Jana IIi Sobieskiego 1/2'
'Jana lll Sobieskiego 1/2'  # three small letters 'L'
'Kozia wólka 5'
...
12/12/17
2017-12-12
Dec 12, 2017
Dec 12th, 2017
12.12.2017
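A very rough sketch of what such cleaning can look like in Python; the regular expressions below cover only a few of the address variants above and are purely illustrative, and date parsing is delegated to the third-party dateutil package:

import re
from dateutil import parser

addresses = [
    'ul. Jana III Sobieskiego 1/2',
    'UL. Jana III Sobieskiego 1/2',
    'ulica Jana III Sobieskiego 1/2',
    'ul.Jana III Sobieskiego 1/2',
    'ul. Jana III Sobieskiego 1 m. 2',
]

def clean_address(text):
    """Very naive normalization of the street prefix and the flat number."""
    text = text.strip()
    # unify 'ul', 'ul.', 'ulica', 'UL.' ... into one canonical prefix
    text = re.sub(r'^(ulica\.?|ul\.?)\s*', 'ul. ', text, flags=re.IGNORECASE)
    # unify the 'm.' / 'apt' flat-number notation into the '1/2' form
    text = re.sub(r'\s+(m\.|apt)\s*(\d+)$', r'/\2', text)
    return text

print({clean_address(a) for a in addresses})
# ideally a single canonical form: {'ul. Jana III Sobieskiego 1/2'}

# Dates in different notations can be parsed into one representation
print(parser.parse('Dec 12, 2017') == parser.parse('2017-12-12'))   # True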
2.5. Working with the libraries
2.5.1. Example of working with scikit-learn
Import the class you plan to use
Instantiate the estimator
Estimator is the scikit-learn word for model
Instantiate means to create an object from the class
The name of the object does not matter
You can specify the tuning parameters, also known as “hyperparameters”, during this step
All parameters not specified are set to their defaults
Fit the model with data (aka “model training”)
The model learns the relationship between the features and the labels
Occurs in place (i.e. it changes the object's state, mutating the object)
Predict the response for a new observation
New observations are called “out-of-sample” data
Uses the information it learned during the model training process
Can predict for multiple observations at once
from sklearn.neighbors import KNeighborsClassifier

# Instantiate the estimator
model = KNeighborsClassifier(n_neighbors=1)

# Fit the model with data (aka "model training");
# `features` and `labels` are assumed to have been defined earlier
model.fit(features, labels)

# Predict the response for a new observation (note the 2-D input)
model.predict([[3, 5, 4, 2]])
# array([2])

# Can predict for multiple observations at once
model.predict([
    [3, 5, 4, 2],
    [5, 4, 3, 2],
])
# array([2, 1])
from sklearn.neighbors import KNeighborsClassifier

# The same workflow with a different hyperparameter (k=5)
model = KNeighborsClassifier(n_neighbors=5)
model.fit(features, labels)
model.predict([
    [3, 5, 4, 2],
    [5, 4, 3, 2],
])
# array([1, 1])
from sklearn.linear_model import LogisticRegression

# A different estimator, but the same fit/predict interface
model = LogisticRegression()
model.fit(features, labels)
model.predict([
    [3, 5, 4, 2],
    [5, 4, 3, 2],
])
# array([2, 0])
2.6. Categories of machine learning algorithms
2.6.1. Supervised Learning
Also known as:
Input data is called training data and has a known label or result such as spam/not-spam or a stock price at a time.
A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.
Example problems are classification and regression.
K Nearest Neighbors (KNN)
Linear Regression
Support Vector Machines (SVM)
Artificial neural networks
2.6.2. Unsupervised Learning
Also known as:
Input data is not labeled and does not have a known result.
A model is prepared by deducing structures present in the input data. This may be to extract general rules. It may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity.
Example problems are clustering, dimensionality reduction and association rule learning.
Clustering (flat clustering, hierarchical clustering; see the sketch after this list)
Principal Component Analysis (PCA)
Artificial neural networks
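A minimal sketch of two of the techniques above, k-means clustering and PCA, on a small synthetic, unlabeled data set (the data and parameters are invented for illustration):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Unlabeled data: two fuzzy groups in 3-D, no labels are given to the models
rng = np.random.RandomState(0)
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 3)),
    rng.normal(loc=3.0, scale=0.5, size=(50, 3)),
])

# Clustering: the model deduces the group structure on its own
clusters = KMeans(n_clusters=2).fit_predict(data)

# Dimensionality reduction: project onto the 2 main directions of variance
reduced = PCA(n_components=2).fit_transform(data)

print(clusters[:5], reduced.shape)   # e.g. [1 1 1 1 1] (100, 2)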
2.6.3. Semi-Supervised Learning
Input data is a mixture of labeled and unlabeled examples.
There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.
Example problems are classification and regression.
Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data.
a combination of both worlds
not all data is labeled
the future of machine learning
because of the sheer volume of data, not all of it can be labeled
human in the loop:
an expert labels part of the data
the computer performs a preliminary analysis of part of the data
it presents the iteration to a human
the human interactively corrects the labels and assesses the labeling quality
the computer performs the next analysis (a minimal code sketch follows this list)
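A minimal sketch of the semi-supervised idea using scikit-learn's LabelPropagation, one concrete algorithm of this kind; the data, the two expert-provided labels and all parameters are invented for illustration:

import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Two groups in 2-D; only one point per group is labeled by the expert,
# the rest are marked -1 (scikit-learn's convention for "unlabeled")
rng = np.random.RandomState(0)
features = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(20, 2)),
    rng.normal(loc=2.0, scale=0.3, size=(20, 2)),
])
labels = np.full(40, -1)
labels[0] = 0     # the expert labels one example of class 0
labels[20] = 1    # ... and one example of class 1

# The model propagates the two expert labels to the remaining 38 points
model = LabelPropagation(kernel='knn', n_neighbors=5)
model.fit(features, labels)
print(model.transduction_)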