# Tutorials

## Introduction to Bright Wire

Project overview and quick guide to getting started.

## Classification Overview with Bright Wire

Training Naive Bayes, Decision Tree, Random Forest, K Nearest Neighbours (KNN), Multinomial Logistic Regression and Neural Network classifiers on the Iris dataset.
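Bright Wire itself is a .NET library, so the tutorial's code is C#; as a language-neutral illustration of the simplest classifier in the list, here is a minimal k-nearest-neighbours sketch in Python (not Bright Wire's API) on a toy Iris-style dataset:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify a query point by majority vote among its k nearest
    training examples (Euclidean distance)."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy data in the spirit of Iris: (petal length, petal width) -> species
train = [
    ((1.4, 0.2), "setosa"), ((1.3, 0.2), "setosa"),
    ((4.7, 1.4), "versicolor"), ((4.5, 1.5), "versicolor"),
    ((6.0, 2.5), "virginica"), ((5.9, 2.1), "virginica"),
]
prediction = knn_predict(train, (1.5, 0.3))  # nearest neighbours are setosa
```

The other classifiers in the tutorial trade this simplicity for better generalisation on larger datasets.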

## Generating Text with Markov Chains

Building a Markov Model from source text and using it to generate new text.
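The core idea (shown here as a generic Python sketch, not Bright Wire's API) is to map each run of N consecutive words to the words observed to follow it, then walk that table randomly:

```python
import random
from collections import defaultdict

def build_markov_model(text, order=2):
    """Map each tuple of `order` consecutive words to the list of
    words that follow it in the source text."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Start from a random state and repeatedly sample a next word."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    output = list(state)
    for _ in range(length):
        choices = model.get(tuple(output[-len(state):]))
        if not choices:
            break  # dead end: this state only appears at the end of the text
        output.append(rng.choice(choices))
    return " ".join(output)
```

Because next words are sampled in proportion to how often they appeared, the generated text mimics the local style of the source.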

## Recognising Handwritten Digits (MNIST)

Training a vanilla feed forward Neural Network on images of handwritten digits.

## Sentiment Analysis

Learning to classify sentences as containing either positive or negative sentiment with Naive Bayes and Neural Networks.
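As a rough illustration of the Naive Bayes half of the tutorial (a generic Python sketch, not Bright Wire's API), a multinomial Naive Bayes classifier with add-one smoothing can be written in a few lines:

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes training. docs: list of (words, label)."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    doc_counts = Counter()
    vocab = set()
    for words, label in docs:
        doc_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, doc_counts, vocab

def classify_nb(model, words):
    """Pick the label with the highest log prior + log likelihood,
    using add-one smoothing for unseen words."""
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best = None
    for label in doc_counts:
        total = sum(word_counts[label].values())
        score = math.log(doc_counts[label] / total_docs)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        best = max(best, (score, label)) if best else (score, label)
    return best[1]
```

The neural network approach in the tutorial learns word representations instead of counting them, which handles phrasing the counting model misses.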

## Text Clustering Four Ways

Finding clusters of related documents with four different techniques: K Means, NNMF, Random Projections and SVD.
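To show the flavour of the first technique, here is a minimal k-means sketch in Python (illustrative only, not Bright Wire's API) that alternates between assigning points to their nearest centroid and moving each centroid to the mean of its cluster:

```python
import math
import random

def k_means(points, k, iterations=20, seed=0):
    """Plain k-means: points are tuples of floats, centroids start as a
    random sample of the data."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its members
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, clusters
```

For documents, the points would be TF-IDF or embedding vectors rather than raw coordinates; the other three techniques in the tutorial reduce the dimensionality of those vectors before (or instead of) clustering.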

## Teaching a Recurrent Neural Net Binary Addition

Getting a recurrent neural network to learn the rules of binary addition, using its memory to store the carry bit as needed.
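The structure of the task (sketched here in Python as an illustration, not Bright Wire code) makes the memory requirement concrete: the network sees one bit of each operand per time step, least significant first, and must output the corresponding sum bit, which depends on a carry propagated from earlier steps:

```python
def binary_addition_sequence(a, b, bits=8):
    """Per-step training data for binary addition: the input at each
    time step is one bit from each operand (least significant first),
    the target is the matching bit of the sum. The carry is exactly the
    hidden state a recurrent network has to learn to maintain."""
    carry = 0
    steps = []
    for i in range(bits):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        steps.append(((bit_a, bit_b), total & 1))  # (inputs, target bit)
        carry = total >> 1
    return steps

# Reassembling the target bits should recover the true sum
steps = binary_addition_sequence(5, 6)
result = sum(bit << i for i, (_, bit) in enumerate(steps))
```

A feed forward network cannot solve this from single-step inputs, because the correct output bit is ambiguous without the carry.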

## GRU Recurrent Neural Networks

More complicated sequences call for more complicated neural networks. This tutorial shows how to use a GRU recurrent neural network to learn the Embedded Reber Grammar.
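Strings from this grammar can be generated from a small finite state machine; the sketch below uses the commonly published transition table for the Reber grammar (a Python illustration, not Bright Wire's API). The embedded form wraps a full Reber string in a matching T...T or P...P pair, so the network must remember the second symbol until the very end of the sequence:

```python
import random

# Transitions of the (commonly published) Reber grammar automaton:
# state -> list of (symbol emitted, next state); state 6 is terminal.
REBER = {
    0: [("B", 1)],
    1: [("T", 2), ("P", 3)],
    2: [("S", 2), ("X", 4)],
    3: [("T", 3), ("V", 5)],
    4: [("X", 3), ("S", 6)],
    5: [("P", 4), ("V", 6)],
}

def reber_string(rng):
    state, out = 0, []
    while state != 6:
        symbol, state = rng.choice(REBER[state])
        out.append(symbol)
    return "".join(out) + "E"

def embedded_reber_string(seed=None):
    """Embedded form: B, then T or P, then a full Reber string, then the
    SAME T or P, then E - the long-range dependency the GRU must learn."""
    rng = random.Random(seed)
    wrapper = rng.choice("TP")
    return "B" + wrapper + reber_string(rng) + wrapper + "E"
```

The distance between the two wrapper symbols is unbounded, which is why gated architectures like the GRU succeed where a simple recurrent network struggles.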

## Sequence to Sequence with LSTM

Using different recurrent neural network architectures - one to many, many to one and sequence to sequence - with Long Short Term Memory (LSTM) to classify sequential inputs.
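Whatever the architecture, each LSTM unit applies the same gated update at every time step. The textbook equations, reduced to a single scalar unit for readability (an illustration in Python, not Bright Wire code), look like this:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One step of a single-unit LSTM cell (textbook equations, scalar
    weights for clarity). w maps each gate to its
    (input weight, recurrent weight, bias) triple."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])  # candidate
    c_next = f * c + i * g          # cell state: gated long-term memory
    h_next = o * math.tanh(c_next)  # hidden state: gated output
    return h_next, c_next
```

When the forget gate saturates near 1 and the input gate near 0, the cell state passes through unchanged, which is how an LSTM carries information across long sequences.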

## Convolutional Neural Networks

Learning to recognise handwritten digits (MNIST) with convolutional neural networks gives higher classification accuracy (and a longer training time).
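The operation that gives these networks their name can be sketched in a few lines of Python (illustrative only, not Bright Wire's API): slide a small kernel over the image and take the element-wise product sum at each position:

```python
def convolve2d(image, kernel):
    """'Valid' 2D convolution (strictly, cross-correlation, as in most
    deep learning libraries) of a list-of-lists image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[y + dy][x + dx] * kernel[dy][dx]
                for dy in range(kh) for dx in range(kw))
            for x in range(out_w)
        ]
        for y in range(out_h)
    ]

# A vertical-edge detector on a tiny image with an edge down the middle
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]  # responds wherever intensity jumps left to right
```

Because the same small kernel is reused across the whole image, convolutional layers have far fewer weights than fully connected ones and learn position-independent features such as strokes and edges.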

## Extending Bright Wire: Custom Activation Function

Bright Wire is designed to be easily extended. This tutorial shows how to create a SELU activation function and use it, along with batch normalisation, to train deep feed forward neural networks.
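The SELU function itself is simple; here it is in Python as a reference for what the custom Bright Wire activation computes (the constants are from Klambauer et al.'s self-normalizing networks paper):

```python
import math

# SELU constants from Klambauer et al. (2017), "Self-Normalizing Neural Networks"
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit: lambda * x for positive x,
    lambda * alpha * (exp(x) - 1) otherwise."""
    return LAMBDA * x if x > 0 else LAMBDA * ALPHA * (math.exp(x) - 1.0)
```

The fixed scale and saturation point are chosen so that activations tend to keep zero mean and unit variance as they pass through the network, which is what makes SELU attractive for deep feed forward stacks.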