Ketan Doshi

3.5K Followers

Published in Towards Data Science · Jun 28, 2021

Enterprise ML — Why getting your model to production takes longer than building it

A Gentle Guide to the complexities of model deployment, and integrating with the enterprise application and data pipeline. What the Data Scientist, Data Engineer, ML Engineer, and ML Ops do, in Plain English. — Let’s say we’ve identified a high-impact business problem at our company, built an ML (machine learning) model to tackle it, trained it, and are happy with the prediction results. This was a hard problem to crack that required much research and experimentation. …

Data Science

11 min read



Published in Towards Data Science · Jun 17, 2021

Enterprise Machine Learning — Why building and training a “real-world” model is hard

A Gentle Guide to the lifecycle of a Machine Learning project in the Enterprise, the roles involved and the challenges of building models, in Plain English — What is Enterprise ML? What does it take to deliver a machine learning (ML) application that provides real business value to your company? Once you’ve done that and proved the substantial benefit that ML can bring to the company, how do you expand that effort to additional use cases, and really start to fulfill…

Data Science

9 min read



Published in Towards Data Science · Jun 2, 2021

Transformers Explained Visually — Not Just How, but Why They Work So Well

A Gentle Guide to how the Attention Score calculations capture relationships between words in a sequence, in Plain English. — Transformers have taken the world of NLP by storm in the last few years. Now they are being used with success in applications beyond NLP as well. The Transformer gets its powers because of the Attention module. …
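The Attention Score calculation the blurb refers to is scaled dot-product attention. As a rough illustration (my own NumPy sketch with random stand-in matrices, not the article's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q @ K.T / sqrt(d_k)) @ V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how strongly each word attends to each other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Toy example: 3 words, embedding dimension 4 (random stand-ins for learned projections)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape)          # (3, 4): one context-mixed vector per word
print(attn.sum(axis=-1))  # each row of attention weights sums to 1
```

Each output row is a weighted blend of all value vectors, which is how the mechanism captures pairwise relationships between words.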

Deep Learning

10 min read



Published in Towards Data Science · May 26, 2021

Batch Norm Explained Visually — Why does it work?

A Gentle Guide to the reasons for the Batch Norm layer’s success in making training converge faster, in Plain English — The Batch Norm layer is frequently used in deep learning models in association with a Convolutional or Linear layer. Many state-of-the-art Computer Vision architectures such as Inception and ResNet rely on it to create deeper networks that can be trained faster. In this article, we will explore why Batch Norm…
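One way to see the "why" concretely: without normalization, features at very different scales feed into the next layer, while Batch Norm rescales each feature to zero mean and unit variance over the batch. A minimal NumPy sketch (my illustration, not the article's code):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch dimension, then scale and shift
    mean, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Two features on wildly different scales
x = np.array([[1.0, 500.0],
              [2.0, 600.0],
              [3.0, 700.0]])
y = batch_norm(x)
print(y.mean(axis=0))  # ~[0, 0]: both features now centred
print(y.std(axis=0))   # ~[1, 1]: and on the same scale
```

Putting all features on a comparable scale keeps gradients well-behaved, which is a large part of why training converges faster.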

Neural Networks

9 min read



Published in Towards Data Science · May 23, 2021

Differential and Adaptive Learning Rates — Neural Network Optimizers and Schedulers demystified

A Gentle Guide to boosting model training and hyperparameter tuning with Optimizers and Schedulers, in Plain English — Optimizers are a critical component of neural network architecture. And Schedulers are a vital part of your deep learning toolkit. During training, they play a key role in helping the network learn to make better predictions. But what ‘knobs’ do they have to control their behavior? And how can you…
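To make those "knobs" concrete: the learning rate and momentum belong to the optimizer, while a scheduler changes the learning rate over the course of training. A toy sketch on a 1-D quadratic, with hypothetical hyperparameter values of my choosing (not from the article):

```python
# Minimize f(w) = (w - 3)^2 with SGD + momentum, halving the learning rate every 20 steps
w, velocity = 0.0, 0.0
lr, momentum = 0.1, 0.9        # optimizer knobs

for step in range(100):
    grad = 2 * (w - 3)                          # df/dw
    velocity = momentum * velocity - lr * grad  # momentum accumulates past gradients
    w += velocity
    if (step + 1) % 20 == 0:                    # scheduler knob: step decay
        lr *= 0.5

print(round(w, 2))  # converges close to the minimum at w = 3
```

The same split holds in real frameworks: the optimizer decides how each gradient updates the weights, and the scheduler decides how aggressive those updates are at each stage of training.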

Deep Learning

9 min read



Published in Towards Data Science · May 18, 2021

Batch Norm Explained Visually — How it works, and why neural networks need it

A Gentle Guide to an all-important Deep Learning layer, in Plain English — Batch Norm is an essential part of the toolkit of the modern deep learning practitioner. Soon after it was introduced in the Batch Normalization paper, it was recognized as being transformational in creating deeper neural networks that could be trained faster. Batch Norm is a neural network layer that is…
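Mechanically, the layer normalizes each feature using batch statistics during training, learns a scale (gamma) and shift (beta), and keeps running averages of the statistics for use at inference. A minimal NumPy sketch of those moving parts (an illustration, not the article's code):

```python
import numpy as np

class BatchNorm1d:
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.gamma = np.ones(num_features)   # learnable scale
        self.beta = np.zeros(num_features)   # learnable shift
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.eps, self.momentum = eps, momentum

    def forward(self, x, training=True):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # Exponential moving averages, used in place of batch stats at inference
            self.running_mean += self.momentum * (mean - self.running_mean)
            self.running_var += self.momentum * (var - self.running_var)
        else:
            mean, var = self.running_mean, self.running_var
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta

bn = BatchNorm1d(2)
x = np.random.default_rng(0).standard_normal((64, 2)) * 5 + 3
y = bn.forward(x, training=True)
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~[0, 0] and ~[1, 1]
```

At inference time, `forward(x, training=False)` uses the accumulated running statistics so that single samples (or small batches) get a stable, deterministic normalization.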

Deep Learning

9 min read



Published in Towards Data Science · May 9, 2021

Foundations of NLP Explained — Bleu Score and WER Metrics

A Gentle Guide to two essential metrics (Bleu Score and Word Error Rate) for NLP models, in Plain English — Most NLP applications such as machine translation, chatbots, text summarization, and language models generate some text as their output. In addition, applications like image captioning or automatic speech recognition (i.e., Speech-to-Text) output text, even though they may not be considered pure NLP applications.
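For reference, Word Error Rate is simply the word-level Levenshtein (edit) distance divided by the reference length. A self-contained sketch (my own illustration of the standard metric, not the article's code):

```python
def wer(reference, hypothesis):
    """Word Error Rate = (substitutions + insertions + deletions) / reference length."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming table for word-level edit distance
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[-1][-1] / len(r)

print(round(wer("the cat sat on the mat", "the cat sat mat"), 3))  # → 0.333 (2 deletions / 6 words)
```

Bleu Score works differently: it rewards overlapping n-grams between the hypothesis and reference rather than penalizing edits, which is why the two metrics suit different tasks.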

Deep Learning

10 min read



Published in Towards Data Science · Apr 30, 2021

Image Captions with Attention in Tensorflow, Step-by-step

An end-to-end example using Encoder-Decoder with Attention in Keras and TensorFlow 2.0, in Plain English — Generating Image Captions using deep learning has produced remarkable results in recent years. One of the most widely used architectures was presented in the Show, Attend and Tell paper. The innovation that it introduced was to apply Attention, which has seen much success in the…
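The core of each Show, Attend and Tell decoder step is Bahdanau-style additive attention over image regions: score every encoder feature against the decoder's hidden state, softmax the scores, and take the weighted sum as the context vector. A NumPy sketch with made-up shapes and random weights (my illustration, not the article's TensorFlow code):

```python
import numpy as np

def additive_attention(features, hidden, W1, W2, v):
    """Score each image region against the decoder state, then build a weighted context."""
    # features: (num_regions, feat_dim), hidden: (hid_dim,)
    scores = np.tanh(features @ W1 + hidden @ W2) @ v  # (num_regions,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                               # softmax over regions
    context = alpha @ features                         # (feat_dim,): attention-weighted blend
    return context, alpha

rng = np.random.default_rng(1)
features = rng.standard_normal((49, 8))   # e.g. a 7x7 CNN feature grid, 8-dim features
hidden = rng.standard_normal(16)          # decoder hidden state
W1 = rng.standard_normal((8, 32))
W2 = rng.standard_normal((16, 32))
v = rng.standard_normal(32)

context, alpha = additive_attention(features, hidden, W1, W2, v)
print(context.shape, round(float(alpha.sum()), 6))  # (8,) 1.0
```

In the full model, `context` is concatenated with the previous word's embedding and fed to the decoder RNN to predict the next caption word.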

Deep Learning

11 min read



Published in Towards Data Science · Apr 23, 2021

Image Captions with Deep Learning: State-of-the-Art Architectures

A Gentle Guide to Image Feature Encoders, Sequence Decoders, Attention, and Multi-modal Architectures, in Plain English — Image Captioning is a fascinating application of deep learning that has made tremendous progress in recent years. What makes it even more interesting is that it brings together both Computer Vision and NLP. What is Image Captioning? It takes an image as input and produces a short textual summary describing the content of the…

Computer Vision

9 min read



Published in Towards Data Science · Apr 17, 2021

Leveraging Geolocation Data for Machine Learning: Essential Techniques

A Gentle Guide to Feature Engineering and Visualization with Geospatial data, in Plain English — Location data is an important category of data that you frequently have to deal with in many machine learning applications. Location data typically provides a lot of extra context to your application’s data. For instance, you might want to predict e-commerce sales projections based…
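A staple of geospatial feature engineering is turning two coordinate pairs into a distance feature with the haversine formula. A self-contained sketch (my illustration, not from the article):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Distance from London to Paris, roughly 344 km
d = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
print(round(d))
```

Distances like this (to a store, a city centre, a competitor) often carry far more signal for a model than raw latitude/longitude columns do.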

GIS

10 min read


Machine Learning and Big Data
