Ilya opens scientists’ minds! IMO, this is the most important slide and takeaway from his inspiring talk today at #NeurIPS2024: when trends plateau, nature looks for other species! Ilya Sutskever
There is something interesting that happens with Ilya, G. Hinton, Y. LeCun, J. Hopfield and all these great minds of the field... they took inspiration from nature. They know evolution set up a remarkably successful natural lab that has run a couple of billion years of experiments. They know the logic to be pursued is the logic of nature, of natural phenomena. Even though AI is a subject made of ideas and a computer, it is what happens before and after this that matters. It is the outside world with its wonders that sets the background for innovation in AI. It is when you see a hummingbird choosing just one specific flower and you ask yourself, "How is that possible? What is determined by genetics and what is learned in that case?"
My hero. Such an amazing human. #inspiring Biology is one system with so many expressions. Groups that help others flourish appear to be the healthiest. Symbiotic and parasitic are not so far apart at times…
That's such a pertinent question! The research and development of post-Transformer models, like Liquid Neural Networks, is crucial to unlock many new application areas for AI and to use AI in an ecologically efficient and responsible way. Thank you Ramin Hasani and the Liquid AI team for your very important contribution!
BUT… this isn't nature. These are machines. That is all. 🦦
Business biology 🤪👍👍 very interesting post
Ramin, Liquid AI seems like a platform that could assist biotech. I have a similar startup and need AI to perfect a health beverage; could we talk?
Thanks for sharing… I like how Ilya looks at AI not as a tool, but as an evolving intelligence… probably a hard pill to swallow for humans.
I sent you a message!
Insightful indeed 👍
30 years ago, we dreamed of 3-grams. We combined them with bi-grams and 1-grams, fitting the interpolation weights of these terms. The intuition was to decompose the joint probability of a sentence into conditional probabilities using the chain rule, and, to cope with data and computing limitations, to approximate those conditional probabilities with shorter contexts (tri-grams, bi-grams). At the beginning of my career I worked on bi-gram class models, which assume vocabulary words belong to predefined classes. Using simulated annealing to fit the classification parameters, we were surprised to find that the classes had grammatical meaning. While these methods made sense at the time, they quickly reached their limits, since bi-grams can't capture long contexts. New ideas have pushed progress further, and this slide shows 700-grams. I'm confident it's feasible.
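To make the interpolation idea concrete, here is a minimal sketch (in Python, my choice, since the comment names no language) of a linearly interpolated tri-gram model: the sentence probability is factored with the chain rule and each conditional is approximated by a weighted mix of tri-gram, bi-gram and uni-gram maximum-likelihood estimates. The toy corpus and the fixed interpolation weights are illustrative assumptions; in practice the weights were fit on held-out data.

```python
from collections import Counter

# Toy corpus (assumed for illustration only).
corpus = [
    "the hummingbird chose the red flower".split(),
    "the hummingbird chose the small flower".split(),
    "nature runs the longest experiment".split(),
]

uni_counts, bi_counts, tri_counts = Counter(), Counter(), Counter()
bi_ctx, tri_ctx = Counter(), Counter()  # context counts for the denominators
total = 0

for sent in corpus:
    tokens = ["<s>", "<s>"] + sent + ["</s>"]
    for i in range(2, len(tokens)):
        w1, w2, w3 = tokens[i - 2], tokens[i - 1], tokens[i]
        uni_counts[w3] += 1
        bi_counts[(w2, w3)] += 1
        bi_ctx[w2] += 1
        tri_counts[(w1, w2, w3)] += 1
        tri_ctx[(w1, w2)] += 1
        total += 1

# Maximum-likelihood estimates for each order.
def p_uni(w):
    return uni_counts[w] / total if total else 0.0

def p_bi(w2, w3):
    return bi_counts[(w2, w3)] / bi_ctx[w2] if bi_ctx[w2] else 0.0

def p_tri(w1, w2, w3):
    return tri_counts[(w1, w2, w3)] / tri_ctx[(w1, w2)] if tri_ctx[(w1, w2)] else 0.0

# Linear interpolation of the three orders (weights assumed, normally tuned on held-out data).
L3, L2, L1 = 0.6, 0.3, 0.1

def p_interp(w1, w2, w3):
    return L3 * p_tri(w1, w2, w3) + L2 * p_bi(w2, w3) + L1 * p_uni(w3)

# Chain-rule decomposition of the sentence probability, approximated with
# tri-gram contexts: P(w_1 .. w_n) ≈ ∏_i P(w_i | w_{i-2}, w_{i-1}).
def sentence_prob(sent):
    tokens = ["<s>", "<s>"] + sent + ["</s>"]
    prob = 1.0
    for i in range(2, len(tokens)):
        prob *= p_interp(tokens[i - 2], tokens[i - 1], tokens[i])
    return prob

print(sentence_prob("the hummingbird chose the red flower".split()))
```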