
Year 2016 in review and goals for 2017

Hello people,
It's my 34th birthday today, and I wanted to put the past year in review and think about where I want my life to go in the coming year.

Achievements of this year:
  1. Machine learning course by Andrew Ng (completed)
  2. Calculus I by Robert Ghrist (completed)
  3. Calculus II by Robert Ghrist (completed)
  4. Probability and Statistics (2 weeks left)
  5. Data science - pandas (1 week done)
  6. My first linear regression program, and a neural network built from scratch
  7. My first regex. From scratch. No references. With tests.
  8. Algorithms I by Robert Sedgewick (audit only)
  9. Algorithms II by Robert Sedgewick (audit only)

Also went back over the BE subjects for CS - all the stuff I had learned over the years.

I am super happy to know that MOOCs help a lot in career advancement.

Self-help books that really helped:
  1. How to Win Friends and Influence People - Dale Carnegie
  2. A Mind for Numbers - Barbara Oakley

Altogether, a pretty good year, where learning is concerned. Things/Tips that helped me while learning:
  1. Very accommodating parents and girlfriend.
  2. Whenever I feel overwhelmed by the amount of unindexed information, I take a mental break for a few days. It helps the brain index things. I believe regular meditation helps with this tremendously - I will try it in the coming year.
  3. Take time after every study period to put things in order mentally.
  4. Write things down. It makes linking ideas easier, and the notebook doubles as a quick reference. Don't stick to one style of note-taking; experiment. It's my notebook.
  5. Keep going back to my saved videos and notes whenever I feel a subject is slipping away or I just 'don't get it'. In this regard, Evernote or OneNote is pretty awesome for note-taking.

The way forward:

Life can and does happen, and has very little regard for me or my plans. But I have learned this year that if I have a plan, and if life doesn't happen, then I know what to do. 

So here's what I want to do in the year 2017:

Long-term goals (to finish in this order): 4 hours every day.
  1. Finish my 'CS engineering at home' course that I am doing now. Till end of February.
  2. Finish the book "The Elements of Statistical Learning". Till end of October.
  3. Finish learning the main statistical packages in Python. Till end of October; I'll be doing this in parallel with the statistics book.
  4. Finish the course "Probabilistic Graphical Models" on Coursera. This will stretch into January 2018.

Small things (1 hour every day):
  1. Summarize one research paper a day.
  2. Do one programming problem a day.

All work and no play makes me a dull, ill boy:
  1. Exercise every morning. 45 mins a day.
  2. Cook on weekends.
  3. Read some philosophy. 20 mins every day.

Life goals:
This is the part of my goals that should make me a better and more interesting person. Here goes:
  1. Time management. I lose up to an hour every day procrastinating. I need to stop.
  2. Learn to ride a motorbike. And car.
  3. Home management. I am not at all good at it.
  4. Take two week-long holidays. See a bit of my beautiful country.
  5. Wear proper formal clothes when going to work. Enough of college wear.

I will need to structure my days a little. Time management will be a bitch. I get 24 - (7 + 10) = 7 hours to myself, so I have to get a lot out of them. After marking times on the list above, I still get about half an hour to fool around. Or meditate - which, supposedly, is as good as taking a vacation and even boosts IQ.
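As a quick sanity check on the budget above, here is a small sketch of the daily arithmetic. One assumption on my part: the post never says what the 7 and 10 in 24 - (7 + 10) stand for, so I am guessing sleep and work respectively; the commitment durations come straight from the lists above.

```python
# Daily time-budget check for the 2017 plan.
# Assumed split (not stated in the post): 7 h sleep, 10 h work.
HOURS_FREE = 24 - (7 + 10)  # 7 free hours per day

# Daily commitments from the lists above, in minutes.
commitments = {
    "long-term goals": 4 * 60,  # study block
    "small things": 1 * 60,     # paper summary + programming problem
    "exercise": 45,
    "philosophy": 20,
}

spent = sum(commitments.values())
leftover = HOURS_FREE * 60 - spent
print(f"committed: {spent} min, leftover: {leftover} min")
```

The listed items add up to 6 h 05 min, which leaves 55 minutes of slack out of the 7 free hours - a bit more than the half hour mentioned, so there is even some buffer for days that run long.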
