As mentioned in part 1 (the most important thing :) ), I went through the titles of all NeurIPS 2020 papers (more than 1,900!), read the abstracts of 175 of them, and extracted insights relevant to DL engineers from the following papers.
This is part 2; see part 1 below.
Using other datasets to better solve the target task is ubiquitous in deep learning practice. It could be supervised pre-training (e.g., classification with ImageNet-pretrained weights), self-supervised pre-training (e.g., SimCLR on unlabeled data), or self-training.
(Self-training is a process in which an intermediate model (the teacher), trained on the target dataset, is used to create…
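Although the excerpt is cut off, the teacher-student loop it begins to describe can be sketched in a few lines. This is a minimal illustration using a toy nearest-centroid "model" in place of a real network; all function names and the synthetic data are illustrative assumptions, not the method of any particular paper:

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    """Toy 'model': one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    """Assign each point to its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
# Small labeled target set and a larger unlabeled pool (two Gaussian blobs).
X_lab = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])

# 1. Train the teacher on the labeled target data.
teacher = fit_centroids(X_lab, y_lab, n_classes=2)
# 2. The teacher creates pseudo-labels for the unlabeled pool.
pseudo = predict(teacher, X_unlab)
# 3. Train the student on labeled + pseudo-labeled data combined.
student = fit_centroids(np.vstack([X_lab, X_unlab]),
                        np.concatenate([y_lab, pseudo]), n_classes=2)
```

In practice the "model" would be a neural network and steps 2–3 may be iterated, but the data flow is the same.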
Advances in Deep Learning research are of great utility to a Deep Learning engineer working on real-world problems, because most DL research is empirical, with new techniques and theories validated on datasets that closely resemble real-world datasets/tasks (ImageNet-pretrained weights are still useful!).
But churning through a vast amount of research to acquire the techniques, insights, and perspectives relevant to a DL engineer is time-consuming, stressful, and, not least, overwhelming.
Knowledge Distillation is a process where a smaller/less complex model is trained to imitate the behavior of a larger/more complex model.
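The core of this imitation is usually a loss that pushes the student's softened output distribution toward the teacher's. Below is a minimal numpy sketch of such a temperature-scaled distillation loss; the function names, the temperature value, and the logits are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The student is trained to imitate the teacher's soft predictions;
    the T**2 factor keeps gradient magnitudes comparable across T.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1)
    return (T ** 2) * kl.mean()

teacher_logits = np.array([[5.0, 1.0, -2.0]])
student_logits = np.array([[4.0, 2.0, -1.0]])
print(distillation_loss(teacher_logits, teacher_logits))  # 0.0 when outputs match
print(distillation_loss(student_logits, teacher_logits) > 0)  # imperfect imitation
```

In a full training setup this term is typically combined with the ordinary cross-entropy loss on the hard labels.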
Particularly when deploying NN models on mobile or edge devices, pruning, and model compression in general, is desirable and often the only plausible way to deploy, as the memory and computational budgets of these devices are very limited.
Why not use the potentially infinite virtual memory and computational power of cloud machines? …
Neural network quantization is the process of reducing the precision of the weights in a neural network, thereby reducing its memory, computation, and energy requirements.
Particularly when deploying NN models on mobile or edge devices, quantization, and model compression in general, is desirable and often the only plausible way to deploy a model, as the memory and computational budgets of these devices are very limited.
Why not use the potentially infinite virtual memory and computational power of cloud machines? While a lot of NN models run in the cloud even now, the latency is not low enough for mobile/edge devices…
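To make the precision-reduction idea concrete, here is a minimal sketch of post-training affine quantization to 8-bit integers in numpy; the function names and the uint8/affine scheme are illustrative choices, not any specific framework's API:

```python
import numpy as np

def quantize(w, n_bits=8):
    """Affine (asymmetric) quantization of a float tensor to n-bit ints."""
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float tensor."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.default_rng(0).normal(0, 0.1, size=(64, 64)).astype(np.float32)
q, scale, zp = quantize(w)
w_hat = dequantize(q, scale, zp)
# 8-bit storage is 4x smaller than float32; the reconstruction error is
# bounded by roughly one quantization step.
print(np.abs(w - w_hat).max() <= scale)
```

Real deployments also quantize activations and use integer kernels at inference time; this sketch only shows the weight-storage side of the idea.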
Neural Network (NN) Pruning is the task of reducing the size of a neural network by removing some of its parameters/weights.
Pruning is often performed with the objective of reducing the memory, computational, and energy requirements of training and deploying NN models, which are notorious for their large size, computational expense, and energy consumption.
Particularly when deploying NN models on mobile or edge devices, pruning, and model compression in general, is desirable and often the only plausible way to deploy, as the memory, energy, and computational budgets of these devices are very limited.
But, one can ask, why not use potentially…
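The simplest and most common instance of the idea is magnitude pruning: remove the weights with the smallest absolute values. A minimal numpy sketch, with the function name and sparsity level as illustrative assumptions:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(0, 1, size=(256, 256))
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print(1 - mask.mean())  # fraction of weights removed, ~0.9
```

To actually realize memory and compute savings, the zeros must be stored in a sparse format or removed structurally (whole channels/filters); unstructured zeros alone do not speed up dense hardware.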
In 1988, Jerry Fodor leveled a concern against connectionist models (deep learning) as explanations/models of human language understanding and cognition: they are not systematic. That is, some data points (like sentences) are systematically related to other data points, and humans who understand one of them can understand all of them.
For example, if we understand the sentence 'John loves Mary', we would also understand 'Mary loves John', or for that matter any sentence of the pattern 'NP Vt NP', because the underlying knowledge (concepts?) involved in understanding all these sentences is…
Computer Vision is a scientific endeavor that aims to automate the human visual system: not necessarily to imitate it, but to emulate all of its abilities and go beyond them. So, as with any field that has the potential to change the course of humanity, computer vision has a rich history made up of obsessive human effort, hard problems, victories, failures, and immense hope.
In spite of the complexity of visual scenes in the world around us, computer vision is now capable of detecting practically any type of object, including people…
A relationship created everything to enable itself. A relationship created humanity to enable itself. A relationship created me to enable itself.
As I grow older, shedding things that could be replaced, I tend to hold near to my heart the things that endure and last.
What is the meaning of things?- The answer answered the meaning of things, but the question remained; and went recursively inward to find an answer that is itself recursive- the answer that lasts.
Self-pity of not having things gloomed the prospects of my living. I didn’t write. I was used to not having things I…
NeurIPS is a great conference, attracting state-of-the-art work in almost every aspect of machine learning research.
There are a few things that a researcher in the field should certainly pay attention to at a conference. From my perspective, those things would cluster research articles into groups like these: 1. Understanding, 2. Essentials, 3. Progress, 4. Big problems/future
So, I grouped everything I found influential into these categories. These are posters from the NeurIPS 2018 Tuesday Poster Sessions A & B, with their abstracts.
Despite a very rich research activity leading to numerous interesting…
I decided to read all the abstracts from NIPS/NeurIPS 2018. But it turned out to be infeasible, both physically and mentally, in the time frame I wanted. There are 1,011 accepted papers at this year's conference, including 30 orals, 168 spotlights, and 813 posters, out of 4,856 submissions, for an acceptance rate of 20.8%. (source)
I wanted to read all the abstracts in the 24 waking hours I could get before the conference started. That gave me 1,440 minutes for 1,011 abstracts, an average of 1.42 minutes each. …
Storyteller of art and science. Cognitive Psychology. Artificial Intelligence. Cognitive Neuroscience.