Natural Language Processing Nanodegree earned by MoniGarr

MoniGarr earned the Natural Language Processing Nanodegree from Udacity on February 8, 2022.

Syllabus:

https://www.udacity.com/course/natural-language-processing-nanodegree--nd892

  • Intro to Natural Language Processing
    • Project: Part of Speech Tagging
    • Text Processing
    • Spam Classifier with Naive Bayes
    • Part of Speech Tagging with HMMs
    • IBM Watson Bookworm Lab
    • Jobs in NLP
  • Computing with Natural Language
    • Project: Machine Translation
    • Feature Extraction & Embeddings
    • Topic Modeling
    • Sentiment Analysis
    • Sequence to Sequence
    • Deep Learning Attention
    • RNN Keras Lab
  • Communicating with Natural Language
    • Project: DNN Speech Recognizer
    • Intro to Voice User Interfaces
    • Alexa History Skill
    • Speech Recognition

Project Feedback: Part of Speech Tagging

Reviewer Note

Great work! Congratulations on meeting all requirements of the rubric; your work shows real effort and understanding of the concepts 🎉 🎉 You did a great job and should be proud of yourself.

After reviewing this submission, I am impressed and satisfied with the effort and understanding put in to make this project a success. All the requirements have been met successfully 💯

Project Feedback: Machine Translation

Reviewer Note

Greetings, Student. This was a good implementation, and I congratulate you on passing all rubric items with this submission. It was a delight reviewing your work, as it was well thought out. I encourage you to keep up the good work. Way to go!

Project Feedback: DNN Speech Recognizer

Reviewer Note

Congrats on completing this capstone! You did an amazing job! Keep it up!

Great architecture and overall design. Your performance is stellar! I would have trained for even more epochs to see how low the loss can converge.

Step 1: Trained Model 0: The submission trained the model for at least 20 epochs, and none of the loss values in model_0.pickle are undefined. The trained weights for the model specified in simple_rnn_model are stored in model_0.h5.
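
For context, each model_N.pickle referenced in this rubric holds the Keras training history. A minimal sketch of how the "no undefined losses" check might be done, assuming the capstone template's convention of pickling history.history (a dict with 'loss' and 'val_loss' lists) under a results/ folder; the path and key names here are assumptions:

```python
import math
import pickle

# Sketch of the rubric check: at least 20 epochs and no undefined losses.
# 'results/model_0.pickle' and the 'loss'/'val_loss' keys are assumptions
# based on the capstone template, not a verified path.
with open('results/model_0.pickle', 'rb') as f:
    history = pickle.load(f)

assert len(history['loss']) >= 20, "expected at least 20 training epochs"
assert all(not math.isnan(v) for v in history['loss'] + history['val_loss']), \
    "found an undefined (NaN) loss value"
```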

Step 2: Model 1: RNN + TimeDistributed Dense

Trained Model 1: The submission trained the model for at least 20 epochs, and none of the loss values in model_1.pickle are undefined. The trained weights for the model specified in rnn_model are stored in model_1.h5.

Completed rnn_model Module: The submission includes a sample_models.py file with a completed rnn_model module containing the correct architecture.
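
As a reference point, a minimal sketch of what an rnn_model along these lines might look like in the Keras functional API; the GRU choice, layer sizes, and layer names are illustrative assumptions, not the graded code:

```python
from keras.models import Model
from keras.layers import (Input, GRU, BatchNormalization,
                          TimeDistributed, Dense, Activation)

def rnn_model(input_dim, units=200, output_dim=29):
    """Sketch: recurrent layer + TimeDistributed Dense over audio features."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    rnn = GRU(units, return_sequences=True, name='rnn')(input_data)
    bn_rnn = BatchNormalization(name='bn_rnn')(rnn)
    # The same Dense layer is applied independently at every time step.
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    return Model(inputs=input_data, outputs=y_pred)
```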

Step 2: Model 2: CNN + RNN + TimeDistributed Dense

Completed cnn_rnn_model Module: The submission includes a sample_models.py file with a completed cnn_rnn_model module containing the correct architecture.

Trained Model 2: The submission trained the model for at least 20 epochs, and none of the loss values in model_2.pickle are undefined. The trained weights for the model specified in cnn_rnn_model are stored in model_2.h5.
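
A comparable sketch of a cnn_rnn_model, again assuming the Keras functional API, with a 1-D convolution over time in front of the recurrent layer; the filter count, kernel size, and stride are assumptions:

```python
from keras.models import Model
from keras.layers import (Input, Conv1D, GRU, BatchNormalization,
                          TimeDistributed, Dense, Activation)

def cnn_rnn_model(input_dim, filters=200, kernel_size=11, conv_stride=2,
                  units=200, output_dim=29):
    """Sketch: 1-D convolution over time, then RNN + TimeDistributed Dense."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    conv = Conv1D(filters, kernel_size, strides=conv_stride,
                  padding='valid', activation='relu', name='conv1d')(input_data)
    bn_conv = BatchNormalization(name='bn_conv')(conv)
    rnn = GRU(units, return_sequences=True, name='rnn')(bn_conv)
    bn_rnn = BatchNormalization(name='bn_rnn')(rnn)
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    return Model(inputs=input_data, outputs=y_pred)
```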

Step 2: Model 3: Deeper RNN + TimeDistributed Dense

Completed deep_rnn_model Module: The submission includes a sample_models.py file with a completed deep_rnn_model module containing the correct architecture.

Trained Model 3: The submission trained the model for at least 20 epochs, and none of the loss values in model_3.pickle are undefined. The trained weights for the model specified in deep_rnn_model are stored in model_3.h5.
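
A deep_rnn_model differs from Model 1 mainly in stacking several recurrent layers. A hedged sketch, with the layer count and sizes as assumptions:

```python
from keras.models import Model
from keras.layers import (Input, GRU, BatchNormalization,
                          TimeDistributed, Dense, Activation)

def deep_rnn_model(input_dim, units=200, recur_layers=2, output_dim=29):
    """Sketch: a stack of recurrent layers, each followed by batch norm."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    layer = input_data
    for i in range(recur_layers):
        layer = GRU(units, return_sequences=True, name='rnn_%d' % i)(layer)
        layer = BatchNormalization(name='bn_rnn_%d' % i)(layer)
    time_dense = TimeDistributed(Dense(output_dim))(layer)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    return Model(inputs=input_data, outputs=y_pred)
```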

Step 2: Model 4: Bidirectional RNN + TimeDistributed Dense

Completed bidirectional_rnn_model Module: The submission includes a sample_models.py file with a completed bidirectional_rnn_model module containing the correct architecture.

Trained Model 4: The submission trained the model for at least 20 epochs, and none of the loss values in model_4.pickle are undefined. The trained weights for the model specified in bidirectional_rnn_model are stored in model_4.h5.
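
For Model 4, Keras's Bidirectional wrapper runs the recurrent layer both forward and backward over the sequence and merges the two outputs. A minimal sketch under the same assumptions as the earlier sketches:

```python
from keras.models import Model
from keras.layers import (Input, GRU, Bidirectional,
                          TimeDistributed, Dense, Activation)

def bidirectional_rnn_model(input_dim, units=200, output_dim=29):
    """Sketch: one bidirectional GRU + TimeDistributed Dense."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    # Forward and backward passes over the sequence, outputs concatenated.
    bidir_rnn = Bidirectional(GRU(units, return_sequences=True),
                              name='bidir_rnn')(input_data)
    time_dense = TimeDistributed(Dense(output_dim))(bidir_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    return Model(inputs=input_data, outputs=y_pred)
```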

Step 2: Compare the Models

Question 1: The submission includes a detailed analysis of why different models might perform better than others.

Step 2: Final Model

Trained Final Model: The submission trained the model for at least 20 epochs, and none of the loss values in model_end.pickle are undefined. The trained weights for the model specified in final_model are stored in model_end.h5.

Completed final_model Module: The submission includes a sample_models.py file with a completed final_model module containing a final architecture that is not identical to any of the previous architectures.
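
The graded final architecture is the student's own and is described in the submission notebook; the rubric only requires that it differ from Models 0 through 4. Purely as an illustration of one such combination (every parameter and layer choice below is an assumption), a Conv1D front end feeding stacked bidirectional GRUs might look like:

```python
from keras.models import Model
from keras.layers import (Input, Conv1D, GRU, Bidirectional,
                          BatchNormalization, TimeDistributed, Dense, Activation)

def final_model(input_dim, filters=200, kernel_size=11, conv_stride=2,
                units=200, recur_layers=2, output_dim=29):
    """Illustrative only: Conv1D front end + stacked bidirectional GRUs."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    layer = Conv1D(filters, kernel_size, strides=conv_stride,
                   padding='valid', activation='relu', name='conv1d')(input_data)
    layer = BatchNormalization(name='bn_conv')(layer)
    for i in range(recur_layers):
        layer = Bidirectional(GRU(units, return_sequences=True),
                              name='bidir_rnn_%d' % i)(layer)
        layer = BatchNormalization(name='bn_rnn_%d' % i)(layer)
    time_dense = TimeDistributed(Dense(output_dim))(layer)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    return Model(inputs=input_data, outputs=y_pred)
```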

Question 2: The submission includes a detailed description of how the final model architecture was designed.