AMD has released ROCm, a driver and software stack for running TensorFlow scripts on AMD GPUs. However, many owners (myself included) have encountered plenty of challenges installing TensorFlow on AMD GPUs. Hence, I provide the installation instructions for TensorFlow on AMD GPUs below.


In this post, I guide you through installing ROCm (the AMD GPU driver) and TensorFlow-ROCm (the version compatible with AMD hardware) from the Debian repository on Ubuntu. At the time this post was published, ROCm ran only on Linux platforms (e.g. Ubuntu, CentOS). To run Deep Learning with AMD GPUs on macOS, you can use PlaidML, owned and maintained by PlaidML…
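If you want a quick way to confirm the setup worked, the sketch below (assuming the tensorflow-rocm package is already installed, e.g. via pip) simply asks TensorFlow whether it can see the AMD GPU:

```python
# Minimal sanity check, assuming tensorflow-rocm was installed
# (for example with `pip install tensorflow-rocm`).
import tensorflow as tf

print(tf.__version__)
# If ROCm and tensorflow-rocm are set up correctly, the Radeon card
# should appear in this list of physical GPU devices.
print(tf.config.list_physical_devices('GPU'))
```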


In SISR, Autoencoders and U-Nets are heavily used; however, they are well known for being difficult to train to convergence. The choice of loss function plays an important role in guiding models to an optimum. Today, I introduce two loss functions for Single-Image-Super-Resolution.

Zhengyang Lu and Ying Chen published a U-Net model with innovative loss functions for Single-Image-Super-Resolution. Their work introduces two loss functions: Mean-Squared-Error (MSE) for pixel-wise comparison and Mean-Gradient-Error (MGrE) for edge-wise comparison. In this article, I will walk you through MSE and MGrE and provide code snippets for both.

Mean Squared Error (MSE)

MSE is a traditional loss function used in many Machine Learning algorithms…
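To make the two losses concrete, here is a minimal TensorFlow sketch of both. The MSE part follows the standard definition; the MGrE part assumes a Sobel operator as the image-gradient approximation and a squared-difference reduction, which may differ in detail from Lu and Chen's exact formulation:

```python
import tensorflow as tf

def mse_loss(y_true, y_pred):
    # Pixel-wise comparison: average of the squared pixel differences.
    return tf.reduce_mean(tf.square(y_true - y_pred))

def mgre_loss(y_true, y_pred):
    # Edge-wise comparison: compare image gradients instead of raw pixels.
    # A Sobel operator approximates the gradients here; inputs are assumed
    # to be 4-D float tensors of shape [batch, height, width, channels].
    grad_true = tf.image.sobel_edges(y_true)   # [batch, h, w, c, 2]
    grad_pred = tf.image.sobel_edges(y_pred)
    return tf.reduce_mean(tf.square(grad_true - grad_pred))
```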


For the last 2 weeks, I have been researching and developing a Video Recommendation System. For many years, Content-based and Collaborative Filtering approaches have been heavily used in Recommendation Systems. A Content-based system relies on similarity among items’ characteristics (e.g. cosine similarity), while a Collaborative Filtering system relies on user-item interactions (e.g. Alternating Least Squares). These 2 approaches have been successful in industry; however, based on my understanding, both are limited by large sparse matrices and poor generalization. …
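As a quick illustration of the content-based idea, the toy sketch below scores hypothetical video feature vectors against a seed video using cosine similarity; the feature values are made up purely for demonstration:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two item feature vectors, in [-1, 1].
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical item features: rows are videos, columns are characteristics
# (e.g. genre weights or tag scores) — invented here for illustration only.
item_features = np.array([
    [1.0, 0.2, 0.0],   # video A
    [0.9, 0.1, 0.1],   # video B
    [0.0, 0.8, 1.0],   # video C
])

seed = item_features[0]  # recommend videos similar to video A
scores = [cosine_similarity(seed, item) for item in item_features]
print(scores)  # video B scores much higher than video C
```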


I have recently worked on Computer Vision projects for classification tasks. Papers and tutorials mention Cross Entropy as the most commonly used loss function to measure the difference between predictions and labels. Now the question is: what is Cross Entropy, and how and why do we use it?

This article is based on the Classify images of clothing tutorial from TensorFlow, which uses the Fashion-MNIST dataset.

Clothing images from the Fashion-MNIST dataset

This article does not focus on developing Deep Learning models. Hence, it uses a simple Deep Learning model to classify each clothing image into one of the 10 classes below. …
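For reference, here is a minimal sketch in the spirit of that TensorFlow tutorial: a small Keras model trained on Fashion-MNIST with Sparse Categorical Cross Entropy as the loss. The exact layers and hyperparameters in the full article may differ:

```python
import tensorflow as tf

# Fashion-MNIST: 28x28 grayscale clothing images, 10 classes.
(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),  # one logit per clothing class
])

# Cross Entropy measures the distance between the predicted class
# distribution and the true label.
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
model.fit(x_train, y_train, epochs=1)
```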



Look familiar? He is Sherlock Holmes, using his magnifying glass to “zoom in” on tiny details of the sculpture. In the 21st century, computers are capable of 100x zoom, far better than Holmes’s magnifying glass. As an engineer, my question is: how do computers zoom in and out?

Geometric Transformation

A geometric transformation maps the positions and orientation of an image to new positions and orientations according to a formula. However, an image is nothing without colored pixels. Coloring a transformed image is a tough task for computers; hence, Computer Vision researchers leverage Pre-Calculus to compute inverse functions that map pixels of the new image back to…
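To make the inverse-mapping idea concrete, here is a minimal NumPy sketch of zooming by nearest-neighbor inverse mapping; it is only an illustration of the principle, not necessarily the exact method discussed later:

```python
import numpy as np

def zoom_nearest(image, scale):
    """Zoom by inverse mapping: for every pixel of the OUTPUT image,
    compute which source pixel it comes from and copy its color.
    Nearest-neighbor interpolation is used here for simplicity."""
    h, w = image.shape[:2]
    out_h, out_w = int(h * scale), int(w * scale)
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    for y in range(out_h):
        for x in range(out_w):
            # Inverse of the forward transform (x, y) -> (scale*x, scale*y).
            src_y = min(int(y / scale), h - 1)
            src_x = min(int(x / scale), w - 1)
            out[y, x] = image[src_y, src_x]
    return out
```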


Today’s topic is another Image Processing algorithm: Segmentation by Thresholding. Many of you are probably fans of Marvel’s or DC’s world of superheroes. Let’s look at a random scene of Superman:

URL: https://www.pinterest.com/pin/791507703239539817/

Yup! You are looking at CGI. Let’s say that you and I want to segment Superman out of the scene, just because we can. In this article, we implement Thresholding by Quantization to segment Superman.

Segmentation by Thresholding

Segmentation is a technique for separating objects from the background, and there are several approaches [1]:

  • Intensity-based Segmentation: Thresholding
  • Edge-based Segmentation
  • Region-based Segmentation

In this article, we focus on Intensity-based Segmentation…
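As a preview of the idea, the sketch below performs the simplest form of intensity-based segmentation: every pixel brighter than a chosen threshold becomes foreground. The article’s Thresholding-by-Quantization variant builds on the same principle, though its exact steps may differ:

```python
import numpy as np

def threshold_segment(gray, threshold=128):
    """Intensity-based segmentation: pixels brighter than the threshold
    are treated as foreground (255), everything else as background (0)."""
    mask = (gray > threshold).astype(np.uint8) * 255
    return mask

# Usage: load the frame as a grayscale NumPy array first, for example with
# Pillow:  gray = np.array(Image.open('superman.jpg').convert('L'))
# then:    mask = threshold_segment(gray, threshold=150)
```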


Some of you reading this post have posted pictures on Instagram and used the built-in tools to increase or decrease brightness, contrast, gamma, etc., to make your boring shots more beautiful, detailed, and attractive. I have done it several times, and I wondered how computers were able to do it. In this post, I address Image Contrast and show one of the million methods out there.

Let’s look at the image below:

A pair of low-contrast and high-contrast images

In the left image, you can see a red/brown brick building, trees, and snow, and it looks boring and sad. In the right image, the colors look brighter. The…
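As one concrete (and deliberately simple) example of contrast enhancement, the sketch below applies min-max contrast stretching with NumPy; this is an assumption for illustration and may not be the exact method described later in the article:

```python
import numpy as np

def stretch_contrast(gray):
    """Min-max contrast stretching: remap the darkest pixel to 0 and the
    brightest to 255, spreading all other intensities linearly in between."""
    lo, hi = gray.min(), gray.max()
    if hi == lo:                      # flat image, nothing to stretch
        return gray.copy()
    stretched = (gray.astype(np.float32) - lo) / (hi - lo) * 255.0
    return stretched.astype(np.uint8)
```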

Eric Ngo

Visiting AI Researcher @deepkaphaai | CS@UTDallas | Let’s connect: https://www.linkedin.com/in/dat-ngo-ab8b20148/
