Last month Rob High, the CTO of Watson, announced on his blog a new package that allows users to leverage some of Watson's tools from within the R framework.
In this post, we will install this package and use it to explore Watson's ability to automatically recognize text within images (English only, for now). We will use it to determine whether a complete abdominal US study includes all of the required organs and, if not, to tell us which one(s) are missing.
The CognizeR package is available for download directly from GitHub. You will note that, unlike other packages we have used before, this one is not yet available in the CRAN repository and must be installed manually. Alternatively, as we will do here, the devtools and curl packages allow direct installation from GitHub. The installation instructions are described in the Readme.md file.
# Store the Watson Visual Recognition API key so it can be reloaded
# later with load("key.Rdata") instead of being hard-coded in scripts
IMAGE_API_KEY <- "********************"
save(IMAGE_API_KEY, file = "key.Rdata")

# Install devtools and curl, then install cognizer directly from GitHub
install.packages("devtools")
install.packages("https://github.com/jeroenooms/curl/archive/master.tar.gz", repos = NULL)
library("devtools")
devtools::install_github("ColumbusCollaboratory/cognizer")
The Watson API for image recognition can analyze images in either JPG or PNG format. I extracted images from a complete abdominal US exam and manually removed all of the PHI to anonymize the images. You can download them here.
Let's look at an example image. To display an image in R, we read it in using the png package (there is also a jpeg package) and display it using the grid.raster() function.
library("png")
library("grid")
image_text_path <- "./Images/"
image_list <- list.files(image_text_path, pattern="^US*", full.names=FALSE)
grid.raster(readPNG(paste(image_text_path,image_list[1], sep="")))
We can see a bunch of text on the image. There is also a black stripe across the top of the image where the patient's PHI originally appeared.
I spent some time playing with Watson's ability to extract the pertinent text from these US images prior to writing this post. Some aspect of the underlying algorithm appears to like black text on a white background better than the reverse. Therefore, before we start the text analysis, we will invert all of the images so that the white-on-black text becomes black-on-white.
# Invert each image (1 - pixel value) and save it with a "Neg" prefix
for (i in 1:length(image_list)) {
  raw <- readPNG(paste(image_text_path, image_list[i], sep = ""))
  writePNG(1 - raw, target = paste(image_text_path, "Neg", image_list[i], sep = ""))
}
# Collect the inverted images and display the first one
image_list_neg <- list.files(image_text_path, pattern = "^Neg", full.names = TRUE)
grid.raster(readPNG(image_list_neg[1]))
Now that we have everything set up correctly, we can run the images through Watson to see what it detects in each. It is amazing that this API is so simple: it requires only one line to interface with Watson's servers and get the analysis.
library("cognizer")
image_text <- image_detecttext(image_list_neg, IMAGE_API_KEY)
The output of Watson's algorithm is very organized, but not in a human-friendly format. If we dig into the data frame structure of the results for one of the images, we find the data of interest under images and then words.
# Inspect the full structure, then pull out just the recognized words
str(image_text[[1]])
image_text[[1]]$images$words
This shows us that Watson found 7 words in the image.
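Since each result follows the same accessor pattern (used again later in this post), we can confirm the word count programmatically:

length(image_text[[1]]$images$words[[1]]$word)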
This output structure is identical to what we would receive if we used Python, Node.js, or any other language to query Watson. The difference is that it is normally returned as a JSON object instead of this R data structure. The gritty details are listed on Watson's API page: https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/#recognize_text
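If you want to see what the JSON form looks like, the jsonlite package (assuming it is installed; it is not used elsewhere in this post) can serialize the R structure back into JSON:

library("jsonlite")
# Convert the R list returned by cognizer into pretty-printed JSON
toJSON(image_text[[1]], pretty = TRUE)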
In order to document completeness for an abdominal ultrasound study, the following organs must be imaged:

- Liver
- Gallbladder
- Common bile duct
- Pancreas
- Spleen
- Kidneys
- Aorta
- Inferior vena cava
We can loop through the text data returned by Watson to see if everything is present. First, we define the list of terms we want to search for in the images, recognizing that some of them are often abbreviated. Of course, these search terms could be changed depending on the particulars of a radiology practice. We then create an array with one entry per organ, representing whether it has been found, initialized to all 0s.
terms <- list("liver",
c("gallbladder","gb"),
c("common bile duct","cbd"),
"pancreas",
"spleen",
"kidney",
c("aorta","ao"),
c("inferior vena cava","ivc"))
found <- rep(0, length(terms))
Now we cycle through the images to determine if any organ labels are there. We start by looping over each image. Within that loop, we only search for the organs that have not been found in prior images (the todo variable). We then loop over our pre-defined terms for those organs and, if a term is found in the image, set the corresponding found flag to 1.
for (im_text in image_text) {
  # Only search for the organs that have not been found yet
  todo <- which(found == 0)
  if (length(todo) > 0) {
    image_words <- im_text$images$words[[1]]$word
    for (t in todo) {
      organ <- terms[[t]]
      # match() returns NA for any term not present in this image
      result <- match(organ, image_words)
      if (length(which(!is.na(result))) > 0) {
        found[t] <- 1
      }
    }
  } else {
    # Everything has been found; skip the remaining images
    break
  }
}
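Before the final check, we can glance at which organs were matched so far (a quick summary table, not part of the original workflow):

# Pair each organ's primary search term with its 0/1 found flag
data.frame(organ = sapply(terms, `[`, 1), found = found)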
Now we finish by checking if all the terms have been found, and if not, output an error and the missing organs.
notfound <- which(found == 0)
if (length(notfound) > 0) {
  print("Exam not complete. Missing terms")
  for (i in notfound) {
    print(terms[[i]])
  }
} else {
  print("Complete Exam")
}
It seems that the aorta and inferior vena cava were not found. However, if we look at image 31, and the text associated with it, we can see that the labels were there; Watson's image recognition was simply not able to separate them because of the way they were displayed on the image.
image_text[[31]]$images$words[[1]]$word
grid.raster(readPNG(paste(image_text_path,image_list[31], sep="")))
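One possible workaround, which I did not use above, is substring matching rather than exact matching, so that a combined label (for example, something like "AO/IVC"; the exact rendering is hypothetical) would still register both organs:

# Substring matching with grepl() instead of exact matching with match();
# tolower() also guards against case differences in Watson's output
image_words <- tolower(image_text[[31]]$images$words[[1]]$word)
sapply(c("ao", "ivc"), function(p) any(grepl(p, image_words, fixed = TRUE)))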
So how did Watson do? It correctly identified much of the text throughout the US images. We did have to invert them but that is trivial from a computational perspective.
However, Watson is looking for English words, so some abbreviations are either not recognized or are recognized incorrectly. We saw this with the abbreviations for the aorta and inferior vena cava. Similar issues were seen with dates present in other images I fed to Watson. When this feature is expanded beyond English words, I think it will be very accurate.
One important consideration is that the Bluemix APIs, as currently implemented, are not HIPAA compliant. Microsoft also offers some artificial intelligence services with relatively easy access that do claim to be HIPAA compliant, but I have not explored them yet.
In summary, this example has barely scratched the surface of the possibilities that Watson has to offer. It is exciting to have such a powerful tool integrated into the R framework. The ability to access cloud-based resources such as Watson will greatly expand the power available to data scientists.
Unfortunately, the text recognition feature showcased here has reverted to a closed beta status. The other aspects of the API, also compatible with this R package, continue to be available. It is not clear why the Bluemix team has chosen to move text recognition back to closed beta, but there are many possible avenues for exploration with the other available functions.
paste("Author:", Sys.getenv("USER"))
paste("Last edited:", Sys.time())
R.version.string