Hi people! Half the year is over, and I think it's good to list things out, so that I have an idea of how I am doing with my studies (and pretty much everything else). It's been a wonderful and fulfilling half year, to be honest. I did a lot of things I always wanted to do.

I started experimenting with hydroponics. I haven't really progressed much, but I am sure I will do something substantial in the other half of the year. The benefits over the traditional way are many, and the joy of watching your plants grow is invaluable, to say the least.

I read a few books on history. I have always wanted to do this, but I always had an excuse or two to avoid it. I finally started, and it's brought me a sense of childlike wonder, something I sorely missed.

I cleaned my home! That's 20 years of procrastination right there! It was insane, but I got it done. Whew! And wow!

At the beginning of the year, I finished my re-study of the CS subjects. Post February, I opened th
I made a small Markov chain joke generator during my coffee break sometime last week. This is a continuation of the last post, where we did a similar thing. I did this specifically to see how well the idea could be carried over to a language I have typically not used before for ML/NLP. Let me run you guys through it.

First of all, the Markov chain needs a bunch of data to tell it how exactly you want your sentences constructed:

str_arr = [sentence1, sentence2, ...]

Next, we create a dictionary of all trigrams present across the sentences. To do this, we use all bigrams as keys, and the succeeding word as the corresponding value. Each key-value pair thus forms a trigram. As an example, consider the sentence: "The man had a dog." The dictionary for this sentence will be:

{ (The, man) : [had], (man, had) : [a], (had, a) : [dog] }

Next up, we use the dictionary that we just made to create sentences. Here we provide the first two words, and let the function take it from there. A sketch of the whole pipeline follows below.
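Since the post doesn't show the actual code or name the language, here is a minimal sketch of both steps in Go, purely as a stand-in; the names strArr, buildChain, and generate are my own, not from the original.

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// buildChain maps each bigram (joined with a space) to the list of
// words seen following it across all sentences.
func buildChain(sentences []string) map[string][]string {
	chain := make(map[string][]string)
	for _, s := range sentences {
		words := strings.Fields(s)
		for i := 0; i < len(words)-2; i++ {
			key := words[i] + " " + words[i+1]
			chain[key] = append(chain[key], words[i+2])
		}
	}
	return chain
}

// generate starts from two seed words and keeps appending a random
// successor until the chain dead-ends or maxWords is reached.
func generate(chain map[string][]string, w1, w2 string, maxWords int) string {
	out := []string{w1, w2}
	for len(out) < maxWords {
		nexts, ok := chain[w1+" "+w2]
		if !ok {
			break // no trigram starts with this bigram
		}
		next := nexts[rand.Intn(len(nexts))]
		out = append(out, next)
		w1, w2 = w2, next
	}
	return strings.Join(out, " ")
}

func main() {
	strArr := []string{
		"The man had a dog.",
		"The man had a plan.",
	}
	chain := buildChain(strArr)
	fmt.Println(generate(chain, "The", "man", 20))
}
```

One small design choice worth noting: duplicates are kept in the value lists on purpose, so a successor that appears more often after a given bigram is proportionally more likely to be picked.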