Continuing our series on active and interesting community members, this time we have an interview with u/shoumikchow.
Shoumik lives in Houston, Texas, where he is pursuing his Master's degree in Computer Science and working at the Quantitative Imaging Lab at the University of Houston. Some of his lab mates work on person re-identification/tracking, object tracking across different cameras, video tampering, and related problems. His current focus is trying to determine whether we can infer social networks from videos. For example, if two people walk together, can we automatically deduce that they know each other?
This is the transcription of my interview with Shoumik:
What made you get into ML\CV?
Even though I had known about ML (or data science, as it was called back then) for a long time, my first real exposure to it was relatively late. In 2016, I attended a four-day knowledge initiative in Bangladesh called KolpoKoushol, which was organized by a few graduate students from top US universities. All the participants attended several talks throughout the four days and had to build a project based on data we were given. I was part of a team that made a data visualization project, but I was exposed to a lot of other teams that were doing ML projects.
After KolpoKoushol, I got in touch with a few of the attendees, as well as some of the organizers, to work on a long-term project. Mentored by Dr. Nazmus Saquib (then a PhD student at the MIT Media Lab), we eventually wrote a paper, published at the Machine Learning for the Developing World workshop at NeurIPS 2018, in which we showed that a clique exists – or seems to exist – amongst the top political entities in Bangladesh, according to data from newspapers. We also showed how the core actors in these networks change over time.
My foray into CV was even more serendipitous. Right after my paper was published, I was invited to a workshop on financial inclusion organized by the Bill and Melinda Gates Foundation. I was invited only because Dr. Saquib shared the paper on his Facebook and Sabhanaz Rashid Diya (who was working at the Gates Foundation at the time) came upon the post. At the workshop, I met one of the co-founders of Gaze and managed to land an interview at the company. I joined Gaze with minimal experience in computer vision, had to basically learn on the job, and haven't looked back since!
What are your goals in the field? Where do you see yourself in 5 years?
I hope to advance the field of computer vision in a significant way. I also hope to use computer vision technologies to advance other fields to help humanity. AI for social good is something I am very passionate about and I am constantly trying to merge my two interests.
5 years is an eternity in this field but I hope to still be in whatever field computer vision evolves into and hopefully work at a leading AI lab.
How did you first find 2d3d?
I found out about 2d3d from the r/MachineLearning subreddit. I attended the first talk, which Peter himself gave, and have been attending as many talks as I can since. One notable talk was by Dr. Jingdong Wang of Microsoft, who spoke about the HRNet paper. I had to stay up till 2:00am for it to finish, but it was worth every bit.
What do you find cool\exciting about the community?
I think the community is very supportive. I also love the fact that it is open to beginners and no one is afraid to ask questions. The researchers who come to give talks are working in the cutting-edge of their fields and are very inspiring.
What cool projects have you been working on in the field?
I am currently working on my Master's thesis, where we are trying to determine whether we can deduce social networks among people from videos.
Another project I’ve worked on is bbox-visualizer, a stand-alone package that lets researchers draw bounding boxes and label them easily. The code is very accessible, so I would encourage any open-source enthusiast to contribute to the project. It would also be a good place to start for beginners who are new to computer vision or open source.
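To give a feel for what a bounding-box helper like this does under the hood, here is a minimal, numpy-only sketch of drawing a box border on an image array. The function name `draw_bbox` and its signature are illustrative assumptions for this sketch, not the bbox-visualizer API itself (the actual package builds on OpenCV):

```python
import numpy as np

def draw_bbox(img, bbox, color=(0, 255, 0), thickness=2):
    """Draw a rectangular border on an RGB image array (illustrative helper).

    bbox is (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    x1, y1, x2, y2 = bbox
    img[y1:y1 + thickness, x1:x2] = color      # top edge
    img[y2 - thickness:y2, x1:x2] = color      # bottom edge
    img[y1:y2, x1:x1 + thickness] = color      # left edge
    img[y1:y2, x2 - thickness:x2] = color      # right edge
    return img

# Draw a green box on a blank 100x100 canvas.
canvas = np.zeros((100, 100, 3), dtype=np.uint8)
canvas = draw_bbox(canvas, (10, 20, 60, 80))
```

A label would typically be rendered as text just above the top edge; that is the part a library like bbox-visualizer packages up so you don't have to redo the pixel arithmetic for every detection.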
What cool tech do you see evolving and how could we use it to make society better?
I think we’ve had a lot of very cool innovations in the computer vision field. We’ve had GANs, which can generate novel datasets that preserve privacy (check out thispersondoesnotexist.com if you haven’t already!), and a lot of improvement in medical diagnosis using computer vision. I am excited to see what these fields hold for the future.
And of course, we already have Level 2 self-driving cars like Tesla on the roads as we speak, where we have partial automation and the driver still has to monitor the roads.
Improvements in the self-driving field would also make it accessible to more people. I expect Level 5 self-driving, where the car is capable of driving itself in any condition, to be a reality within the next 4-5 years, which would drastically reduce car accidents.
One thing I am really looking forward to is understanding the semantic meaning of images or videos. Even though computer vision models are very successful in understanding what is in a video or photo using segmentation or detection or recognition, what the images or videos mean or represent leaves a lot to be desired. I think that future isn’t too far away and I am excited to see it.
Is there any significant paper\research\project you were exposed to lately which you would like to share with the community?
One area of research that fascinates me is model compression – especially the lottery ticket hypothesis. This was first introduced by Jonathan Frankle and Michael Carbin in the paper The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, where they argue that, thanks to the initialization of the original network, there exists a subnetwork inside a larger network that is capable of being almost as good as the larger network. They found that if they trained a network to completion, pruned a percentage of the trained parameters using a pruning technique, reset the remaining parameters to their initial values, and then trained the smaller network, the new network performed about as well as the larger one while having far fewer parameters and being less computationally expensive.
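The procedure described above can be sketched in a few lines. This is a toy numpy illustration of the lottery ticket steps (record initial weights, train, prune the smallest-magnitude weights, reset the survivors to their initial values); the "training" step is stubbed out with random noise, since the point here is only the prune-and-reset bookkeeping, not an actual learning loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Record the initial weights (theta_0 in the paper's notation).
w_init = rng.normal(size=(8, 8))

# 2. "Train": a random perturbation stands in for real SGD updates.
w_trained = w_init + rng.normal(scale=0.5, size=w_init.shape)

# 3. Prune the fraction p of weights with the smallest trained magnitude.
p = 0.5
threshold = np.quantile(np.abs(w_trained), p)
mask = np.abs(w_trained) >= threshold

# 4. Reset the surviving weights to their ORIGINAL initial values --
#    the key step of the lottery ticket procedure. The masked network
#    would then be retrained from these values.
w_ticket = w_init * mask

# About half the weights are now zero; the rest equal their init values.
sparsity = 1 - mask.mean()
```

In the paper this prune-reset-retrain cycle is applied iteratively, pruning a small percentage per round, which finds sparser "winning tickets" than pruning once.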
Transformers also make it really easy to work with images. While the computational power required for these models is eye-watering, I expect even more research and development on smaller models that can run on edge devices. The convergence of NLP and CV, where the SOTA for both is transformers, will definitely help propel the field toward smaller, more efficient models.