twuilliam.wordpress.com
First experiment — Vanilla MLP with Theano | IFT6266
https://twuilliam.wordpress.com/2014/02/18/first-experiment-vanilla-mlp-with-theano
First experiment — Vanilla MLP with Theano. As a first experiment, I will try to predict the next acoustic sample given a specific number of previous samples. A feedforward neural network (NN) with one hidden layer will be used to perform this task. You will find all my scripts under this repository: generate dataset.py to generate the dataset, and mlp.py to train a NN. The work that I will present is mainly inspired by Hubert. The activation function of the output layer is linear instead of a softmax.
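The post's actual Theano code is in the linked repository; as a rough illustration of the setup it describes, here is a minimal NumPy sketch of a one-hidden-layer MLP with a tanh hidden layer and a linear output (no softmax, since the target is a real-valued sample). The window size, hidden width, and the toy sine signal are placeholders, not the post's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: a sine wave standing in for real acoustic samples.
signal = np.sin(np.linspace(0, 40 * np.pi, 4000))

k = 20          # number of previous samples used as input (placeholder)
n_hidden = 50   # hidden layer width (placeholder)

# Build (input window, next sample) training pairs.
X = np.stack([signal[i:i + k] for i in range(len(signal) - k)])
y = signal[k:]

# One tanh hidden layer, linear output for regression.
W1 = rng.normal(0, 0.1, (k, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)

lr = 0.1
for epoch in range(300):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # linear output, no softmax
    err = pred - y                    # gradient of 1/2 * MSE
    # Backprop through the linear output and tanh hidden layer.
    gW2 = h.T @ err[:, None] / len(y)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((pred - y) ** 2)
```

On a clean sine the next sample is an easy function of the window, so even this plain-batch gradient descent drives the MSE well below the signal's variance.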
ift6266h14.wordpress.com
Apr10 – Exam | Representation Learning - ift6266h14
https://ift6266h14.wordpress.com/2014/04/03/apr10-exam/comment-page-1
Representation Learning – ift6266h14. Yoshua Bengio's graduate class on representation learning and deep learning. Apr10 – Exam. Posted by Yoshua Bengio. Please prepare for the final exam by reviewing the material studied during the term and the questions/answers produced by all of us. Examples of exams from previous years can be found there.
twuilliam.wordpress.com
Last Experiment — Extracting features with SdA | IFT6266
https://twuilliam.wordpress.com/2014/04/30/last-experiment-extracting-features-with-sda/comment-page-1
Last Experiment — Extracting features with SdA. Phonetic information (not working…). I must have a bug in my code… but time is ticking away, so I decided to come back to my third experiment code (with only the acoustic sample). I will try a stacked denoising auto-encoder (SdA) to see if we can improve the MSE by capturing the distribution of the data. I adapted the code from this tutorial. Ran for around 35 min (SdA MLP) on a GPU. Train: 0.011868, valid 0.056046, test 0.051864. I only ran the exper...
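The post adapts the code from a tutorial; as a rough illustration of the building block involved, here is a minimal NumPy sketch of a single denoising autoencoder layer with tied weights (the sizes, corruption level, and low-rank toy data are placeholders, not the tutorial's values — a full SdA would stack several such layers before fine-tuning):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy low-rank data standing in for acoustic frames.
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 30)) * 0.3

n_in, n_hid = X.shape[1], 10
W = rng.normal(0, 0.1, (n_in, n_hid))   # tied encoder/decoder weights
b = np.zeros(n_hid)                     # encoder bias
c = np.zeros(n_in)                      # decoder bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(X):
    return sigmoid(X @ W + b) @ W.T + c

mse0 = np.mean((reconstruct(X) - X) ** 2)   # error before training

lr = 0.1
for epoch in range(300):
    # Denoising: corrupt the input, reconstruct the CLEAN version.
    mask = rng.random(X.shape) > 0.3        # zero out ~30% of inputs
    Xc = X * mask
    h = sigmoid(Xc @ W + b)                 # encoder
    Xr = h @ W.T + c                        # tied-weight linear decoder
    err = Xr - X                            # squared-error gradient core
    # Tied weights: W receives gradients from encoder AND decoder.
    dh = (err @ W) * h * (1 - h)
    gW = (Xc.T @ dh + (h.T @ err).T) / len(X)
    W -= lr * gW
    b -= lr * dh.mean(axis=0)
    c -= lr * err.mean(axis=0)

recon_mse = np.mean((reconstruct(X) - X) ** 2)
```

After pretraining, the hidden activations `sigmoid(X @ W + b)` would serve as learned features feeding the next layer (or the supervised MLP on top).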
twuilliam.wordpress.com
Third experiment — Adding hidden layers | IFT6266
https://twuilliam.wordpress.com/2014/04/29/third-experiment-ahl
Third experiment — Adding hidden layers. In this experiment, I will extend my previous post, in which I tried to generate a specific phoneme. I made several modifications to my code (you will find the new version in this repository). The main objective was to go beyond the vanilla MLP that I used during my previous experiments by adding hidden layers. The hidden layers are stored in a list, so it is very easy to add layers (see the DBN tutorial for an extensive explanation). The other hyperparameters are:...
twuilliam.wordpress.com
Some (late) initial thoughts | IFT6266
https://twuilliam.wordpress.com/2014/02/10/some-late-initial-thoughts
Some (late) initial thoughts. This year the project in the IFT6266 course will focus on a unified approach to speech synthesis based on deep learning algorithms. Speech synthesis consists of mapping a text to the corresponding acoustic signal; this process is also called a text-to-speech system. According to Wikipedia, these systems first appeared in the late 50s, which is quite old! As explained by David Krueger, speech can be defined as an acoustic signal with melody, rhythm and harmony, which is why a uni...
twuilliam.wordpress.com
Quick Experiment – Breaking the sine in one line | IFT6266
https://twuilliam.wordpress.com/2014/02/27/quick-experiment-breaking-the-sin-in-one-line/comment-page-1
Quick Experiment – Breaking the sine in one line. As we discussed today in class, and as mentioned by Laurent in his post, adding Gaussian noise is important to prevent convergence to a constant. In my last post I found this kind of behaviour, which results in a flatline. Recall Laurent's post, which defines the size of the window. The estimator of the variance is the mean square error, MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)², where n is the number of training examples. Compared to my previous post. Script (go to line 380). To the .wav file). Second Experi...