Overview
Webhooks are an experimental feature in Slide Score. Please note that webhooks are in active development and breaking changes may be introduced at any time.
Key features of webhooks are the following:
- Allow easy integration with custom third-party tools, for example for running a machine learning model on a user-specified Region of Interest on the slide.
- Give non-technical users the ability to run webhooks without needing to interact with command line tools, including specifying custom parameters in the UI.
- Provide a low-friction way to develop and test new functionality using a simple HTTP call.
A webhook can be written in any language; its only requirement is being able to respond to the HTTP requests made by the Slide Score server. However, since the webhook will likely interact with the Slide Score instance, we recommend using Python along with the slidescore SDK. If you use C#, you can also use this example client in C#, which includes the SlideScoreClient.cs class that will help you make the API calls.
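To illustrate that bare contract, below is a minimal sketch of a webhook server in Python (the class name, host and port are arbitrary choices for illustration). It accepts the POST trigger sent by Slide Score and replies with an empty JSON array of results; in practice you would return at least one of the result types described in the Response section below. The full example further down performs real analysis.
# Minimal webhook sketch (illustration only): accept the POST trigger and
# reply with an empty JSON array of results.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class MinimalWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        trigger = json.loads(self.rfile.read(length).decode())  # the POST body configured in the study UI
        print("Webhook triggered by", trigger.get("email"))
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(json.dumps([]).encode("utf-8"))  # empty result list: no annotations are returned

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), MinimalWebhook).serve_forever()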
In order to configure and run a webhook, four steps need to be followed:
- A webhook server needs to be running at a location that can be reached by the Slide Score server.
- The webhook needs to be configured in the Study overview page, along with any needed questions/parameters.
- A trigger needs to be sent from the Slide viewing page, where the user is asked any configured questions.
- The response from the webhook is shown to the user.
To get familiar with these steps, it is recommended to follow the example given below.
Example
To get started with Slide Score webhooks we have provided an example in the slidescore-sdk: examples/webhook_slide_analysis.py
At the time of writing it contains the following code:
DESC = """
Basic webhook example that uses opencv to find the dark parts in a Whole Slide Image
Date: 24-5-2024
Author: Bart Grosman & Jan Hudecek (SlideScore B.V.)
"""
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import argparse
import tempfile
import traceback
import slidescore
import cv2 # $ pip install opencv-python
import numpy as np # $ pip install numpy
def create_tmp_file(content: str, suffix='.tmp'):
    """Creates a temporary file, used for intermediate files"""
    fd, name = tempfile.mkstemp(suffix)
    if content:
        with open(fd, 'w') as fh:
            fh.write(content)
    return name
def convert_polygons_2_anno2_uuid(polygons, client):
    # Convert to anno2 zip, upload, and return uploaded anno2 uuid
    local_anno2_path = create_tmp_file('', '.zip')
    client.convert_to_anno2(polygons, '{"meta": "Dark polygons"}', local_anno2_path)
    response = client.perform_request("CreateOrphanAnno2", {}, method="POST").json()
    assert response["success"] is True
    client.upload_using_token(local_anno2_path, response["uploadToken"])
    return response["annoUUID"]
def convert_polygons_2_centroids(polygons):
    centroids = []
    for polygon in polygons:
        sum_x = 0
        sum_y = 0
        for point in polygon['points']:
            sum_x += point['x']
            sum_y += point['y']
        centroids.append({
            "x": sum_x / len(polygon['points']),
            "y": sum_y / len(polygon['points']),
        })
    return centroids
def convert_contours_2_polygons(contours, cur_img_dims, roi):
    """Converts OpenCV2 contours to AnnoShape Polygons format of SlideScore
    Also needs the original img width and height to properly map the coordinates"""
    x_factor = roi["size"]["x"] / cur_img_dims[0]
    y_factor = roi["size"]["y"] / cur_img_dims[1]
    x_offset = roi["corner"]["x"]
    y_offset = roi["corner"]["y"]
    polygons = []
    for contour in contours:
        points = []
        for point in contour:
            # The contours are based on a scaled down version of the image
            # so translate these coordinates to coordinates of the original image
            orig_x, orig_y = int(point[0][0]), int(point[0][1])
            points.append({"x": x_offset + int(x_factor * orig_x), "y": y_offset + int(y_factor * orig_y)})
        polygon = {
            "type": "polygon",
            "points": points
        }
        polygons.append(polygon)
    return polygons
def threshold_image(client, image_id: int, rois: list):
    # Extract pixel information by making a "screenshot" of each region of interest
    polygons = []
    for roi in rois:
        if roi["corner"]["x"] is None or roi["corner"]["y"] is None:
            continue  # Basic validation
        image_response = client.perform_request("GetScreenshot", {
            "imageid": image_id,
            "x": roi["corner"]["x"],
            "y": roi["corner"]["y"],
            "width": roi["size"]["x"],
            "height": roi["size"]["y"],
            "level": 15,
            "showScalebar": "false"
        }, method="GET")
        jpeg_bytes = image_response.content
        print("Retrieved image from server, performing analysis using OpenCV")
        # Parse the returned JPEG using OpenCV, and extract the contours from it.
        threshold = 220
        jpeg_as_np = np.frombuffer(jpeg_bytes, dtype=np.uint8)
        img = cv2.imdecode(jpeg_as_np, flags=1)
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, thresh = cv2.threshold(img_gray, threshold, 255, 0)
        contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        print("Performed local image analysis")
        # Convert OpenCV2 contour to AnnoShape Polygons format of SlideScore
        cur_img_dims = (img.shape[1], img.shape[0])
        roi_polygons = convert_contours_2_polygons(contours, cur_img_dims, roi)
        polygons += roi_polygons
        print("Converted image analysis results to SlideScore annotation")
    # AnnoShape polygons
    return polygons
def get_rois(answers: list):
    roi_json = next((answer["value"] for answer in answers if answer["name"] == "ROI"), None)
    if roi_json is None:
        raise Exception("Failed to find the ROI answer")
    rois = json.loads(roi_json)
    if len(rois) == 0:
        raise Exception("No ROI given")
    return rois
class ExampleAPIServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(bytes("Hello world", "utf-8"))
    def do_POST(self):
        content_len = int(self.headers.get('Content-Length'))
        if content_len < 10 or content_len > 4096:
            self.send_response(400)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(bytes("Invalid request", "utf-8"))
            return
        try:
            post_body = self.rfile.read(content_len).decode()
            request = json.loads(post_body)
            """
            default http post payload:
            "host": "${document.location.origin}",
            "studyid": %STUDY_ID%,
            "imageid": %IMAGE_ID%,
            "imagename": "%IMAGE_NAME%",
            "caseid": %CASE_ID%,
            "casename": "%CASE_NAME%",
            "email": "%USER_EMAIL%",
            "webhookid": %WEBHOOK_ID%,
            "webhookname": "%WEBHOOK_NAME%",
            "answers": %ANSWERS%,
            "apitoken": "%API_TOKEN%"
            """
            host = request["host"]
            study_id = int(request["studyid"])
            image_id = int(request["imageid"])
            imagename = request["imagename"]
            case_id = int(request["caseid"])
            email = request["email"]
            webhook_id = int(request["webhookid"])
            webhook_name = request["webhookname"]
            case_name = request["casename"]
            answers = request["answers"]  # Answers to the questions field, needs to be validated to contain the expected values
            apitoken = request["apitoken"]  # Api token that is generated on the fly for this request
            rois = get_rois(answers)  # Get Regions Of Interest
            client = slidescore.APIClient(host, apitoken)
            result_polygons = threshold_image(client, image_id, rois)
            # [{type: "polygon", points: [{x: 1, y: 1}, ...]}]
            request['apitoken'] = "HIDDEN"
            print('Successfully contoured image', request)
            self.send_response(200)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            # Return a JSON array with three results: polygons around the dark parts of the ROI, their centroids, and an anno2 annotation
            self.wfile.write(bytes(json.dumps([{
                "type": "polygons",
                "name": "Dark parts",
                "value": result_polygons
            }, {
                "type": "points",
                "name": "Dark parts centroids",
                "value": convert_polygons_2_centroids(result_polygons)
            }, {
                "type": "anno2",
                "name": "anno2 dark polygons",
                "value": convert_polygons_2_anno2_uuid(result_polygons, client)
            }
            ]), "utf-8"))
            # Give up token, cannot be used after this request
            client.perform_request("GiveUpToken", {}, "POST")
        except Exception as e:
            print("Caught exception:", e)
            print(traceback.format_exc())
            print(post_body)
            self.send_response(500)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(bytes("Unknown error: " + str(e), "utf-8"))
if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        prog='SlideScore openslide OOF detector API',
        description=DESC)
    parser.add_argument('--host', type=str, default='localhost', help='HOST to listen on')
    parser.add_argument('--port', type=int, default=8000, help='PORT to listen on')
    args = parser.parse_args()
    webServer = HTTPServer((args.host, args.port), ExampleAPIServer)
    print(f"Server started http://{args.host}:{args.port}, configure your slidescore instance with a default webhook pointing to this host.")
    try:
        webServer.serve_forever()
    except KeyboardInterrupt:
        pass
    webServer.server_close()
    print("Server stopped.")
Example code explained
This example starts by running an HTTP handler and waiting for POST requests. Once it receives a POST request, presumably because a user triggered the webhook, it retrieves the parameters and does basic input validation.
Then the Region of Interest that the user specified is downloaded using the Slide Score API. The example continues by finding dark regions in the downloaded image using the opencv Python library and thresholding.
Finally, it converts the output of the opencv library to all three formats that can be returned.
Configuration
In addition to running the above Python code on a publicly accessible server, the webhook needs to be configured in the Slide Score Study UI.
Please create a new test study, navigate to the Study Administration page, and select the Webhooks tab, located between the Scoring Sheet and Users tabs. Then click the Add Webhook + button, give your webhook a name and a description, and set the URL to the location of the server running the example Python code, e.g. http://localhost:8000 if you are running it on the same server. Please use HTTPS if the traffic needs to go over the internet.
For the questions string and the HTTP POST body, click Load example to load an example questions sheet for the webhook parameters and a complete HTTP POST body with all the needed parameters.
Finally, press the Save button to add the webhook to this study.
Trigger the webhook
Please add a slide image to the study and navigate to its viewing page. If the webhook was successfully configured in the study, a Trigger webhook button should be visible in the left sidebar. Press it and observe the webhook pop-up.
To actually send the trigger, select the Region of Interest using the Start button. When you are satisfied with your selection, press the Done and Ready to send buttons, and finally the Send trigger button in the webhook pop-up.
A log should now be shown below the Trigger webhook button with details of the progress of the webhook. After a few seconds the webhook should have finished and the results are loaded in the front-end.
You can now click any of the resulting annotations to view them and see the results of the webhook.
Troubleshooting
If the example code fails to run, the user is alerted with the generated error. The webhook log in the left sidebar can give additional hints as to the reason of failure.
If you suspect a bug or have trouble setting up the example, just send us an email and we will be glad to help.
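A quick way to check that the webhook server itself is running and reachable is to request it directly; the example server answers GET requests with "Hello world". A small check, assuming the server runs on localhost:8000 and the requests package is installed:
# Reachability check for the example webhook server (assumes localhost:8000).
import requests  # $ pip install requests

resp = requests.get("http://localhost:8000")
print(resp.status_code, resp.text)  # expected: 200 Hello world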
Response
The HTTP timeout for webhooks is currently configured for 10 minutes.
If you would like to show a visual response to the user on triggering a webhook, a JSON array is expected as a response, containing one or more of three response types: polygons, points, or the more versatile anno2:
[
  {
    type: 'polygons',
    value: [{
      type: 'polygon',
      points: [{x: 1, y: 1}, {x: 100, y: 1}, {x: 100, y: 100}, {x: 1, y: 1}]
    }],
    name: 'Polygon result'
  },
  {
    type: 'points',
    value: [{x: 1, y: 1}, {x: 100, y: 1}, {x: 100, y: 100}, {x: 1, y: 1}],
    name: 'Points result'
  },
  {
    type: 'anno2',
    value: 'a72a2644-37b9-4bb7-b69a-...',
    name: 'Anno2 result'
  }
]
Anno2
The anno2 format is better suited if:
- Better performance is needed
- You need to show a heatmap or mask
- Caching of results on the Slide Score server is wanted
If you would like to use the anno2 response option, you need to have already uploaded the anno2 zip before the webhook response is returned. This can be done using the CreateOrphanAnno2 API method, which is also used in the example. See more docs here.
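The flow mirrors the convert_polygons_2_anno2_uuid helper in the example above: convert the shapes to an anno2 zip locally, request an upload token with CreateOrphanAnno2, upload the zip, and put the returned UUID in the webhook response. A condensed sketch, assuming client is the slidescore.APIClient built from the request's host and apitoken, and polygons is a list of AnnoShape polygons as in the example:
# Condensed anno2 upload, mirroring the helper in the example above.
local_zip = create_tmp_file('', '.zip')  # temporary .zip path (helper from the example)
client.convert_to_anno2(polygons, '{"meta": "Dark polygons"}', local_zip)
response = client.perform_request("CreateOrphanAnno2", {}, method="POST").json()
client.upload_using_token(local_zip, response["uploadToken"])  # upload must finish before responding
anno2_uuid = response["annoUUID"]  # use this as the 'value' of the anno2 result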
Parameters
When configuring a webhook, an HTTP POST body can be specified that will be sent to the webhook. This body can contain the following parameters:
Name | Type | Explanation |
---|---|---|
%STUDY_ID% | int | Numerical identifier of the study on which the webhook was triggered |
%IMAGE_ID% | int | Numerical identifier of the image |
%IMAGE_NAME% | string | Name of the image |
%CASE_ID% | int | Numerical identifier of the case |
%CASE_NAME% | string | Name of the case |
%USER_EMAIL% | string | Email of the user that triggered the webhook |
%WEBHOOK_ID% | int | Numerical identifier of the webhook |
%WEBHOOK_NAME% | string | Name of the webhook |
%ANSWERS% | array | JSON array of the answers to the questions defined in the questions string |
%API_TOKEN% | string | On-the-fly generated API token that is valid for 3 hours for the study in %STUDY_ID%, including getting pixels and setting scores. It should be given up when the webhook is done |
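These placeholders are substituted by the Slide Score server before the request is sent. The default HTTP POST body (also shown in the docstring of the example webhook above) combines them into a single JSON object:
{
    "host": "${document.location.origin}",
    "studyid": %STUDY_ID%,
    "imageid": %IMAGE_ID%,
    "imagename": "%IMAGE_NAME%",
    "caseid": %CASE_ID%,
    "casename": "%CASE_NAME%",
    "email": "%USER_EMAIL%",
    "webhookid": %WEBHOOK_ID%,
    "webhookname": "%WEBHOOK_NAME%",
    "answers": %ANSWERS%,
    "apitoken": "%API_TOKEN%"
}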
Questions string
In order to pass certain parameters to the webhook, a questions form can be specified that the user will be presented with when triggering the webhook. This form can contain, for example, a description of the webhook, a Region of Interest, or a selection of the model to be used.
The format of the questions string is the same as the output of the Questions Editor. Therefore you can simply create a question form in the Editor, press the Share button to copy the link, and paste it in the questions string field.
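As an illustration of what the webhook then receives, the ROI question used by the example produces an entry in %ANSWERS% whose value is itself a JSON string describing the selected regions; this is the shape that get_rois and threshold_image in the example parse (the coordinates below are made up):
[
    {
        "name": "ROI",
        "value": "[{\"corner\": {\"x\": 15000, \"y\": 22000}, \"size\": {\"x\": 2048, \"y\": 2048}}]"
    }
]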