Global Journal of Computer Science and Technology, C: Software & Data Engineering, Volume 22 Issue 2

the capillary. This is shown in Figure 2e and the original image in Figure 2f. The number of pixels within the encapsulated black contour line is summed and divided by the total number of pixels, resulting in the value of the capillary density. Bounding boxes are formed around the predicted regions using the OpenCV contour method [62]. These bounding boxes are then passed to the CNN for prediction. The RoIs predicted as capillaries are drawn with a green bounding box and a black contour line that highlights the capillary shape. This is shown in Figure 2c and Figure 2d. The number of pixels within the encapsulated black contour line is summed and divided by the total number of pixels, resulting in the value of the capillary density.

Fig. 3: A Breakdown of the Building Blocks Used to Build the Proposed System

Fig. 4: The data flow view of how the driver process coordinates with the workers

The CNN consists of three Conv2D blocks: the first Conv2D has 32 filters, the second has 64 filters, and the third has 128 filters, each paired with a MaxPooling2D block. All Conv2D blocks use filters of 3x3 shape. Three dense layers of 128 neurons, 64 neurons, and two neurons follow the Conv2D blocks. The Rectified Linear Unit (ReLU) [63] activation function is used throughout the network except for the last layer, which uses a softmax activation function [64]. This network was trained on 11,000 images of capillaries captured by trained professionals in a clinical setting 1. The details and the specificity of the algorithms, and of the data used to train them, can be found in a previous paper by the same authors [13].

b) The Parallel System Part of the Proposed System

This architecture has two types of nodes: worker nodes and a head node. A worker node consists of the worker process(es), the scheduler, and the object store.
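The density computation described above reduces to a pixel count over a binary mask. The following is a minimal NumPy sketch; the function name `capillary_density` and the toy mask are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def capillary_density(capillary_mask: np.ndarray) -> float:
    """Fraction of frame pixels lying inside the encapsulated black
    contour lines (True = pixel belongs to a predicted capillary)."""
    return float(capillary_mask.sum()) / capillary_mask.size

# Toy 4x4 frame in which a 2x2 region was contoured as a capillary.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(capillary_density(mask))  # 0.25
```

In the real pipeline the mask would come from filling the contours that OpenCV extracts around the CNN-confirmed RoIs, rather than being constructed by hand.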
The anatomy of a worker node and a head node is shown in Figure 3, and the data flow between the components is shown in Figure 4. A worker process encapsulates the code to be executed and is responsible for task submission and execution. In our system, the worker node encapsulates the deep learning algorithms: it receives the image to be analyzed and replies whether the image contains a capillary (blood vessel) or not. The scheduler is the resource manager of the worker node. The object store stores and transfers objects larger than 100 KB. The head node has a Global Control Store (GCS) and a driver process. The GCS is a key-value server that contains objects, actors, and tasks. The driver process submits tasks to the scheduler and keeps track of the objects created across all the nodes. When the code is initiated, an instance of a head node is created. The maximum number of worker processes within this head node is determined by the number of parallel modules instantiated in the architecture and the maximum number of cores. Each worker performs both stages: suggesting RoIs and detecting capillaries using the loaded CNN. Each worker returns a single object that contains the frame's density value, which is stored in the object store. Code execution in this architecture is scheduled by the scheduler, and the tasks are performed over a general-purpose remote procedure call to the worker processes on top of the Python interpreter. The scheduler then communicates the results via an object transfer protocol. For error handling and fault tolerance, if a task fails because a worker process ends unexpectedly, the scheduler retries the task on another worker process. Thus, one of the main differences between the proposed system and the baseline parallel system is that the former uses a driver process to manage the workers, while the latter uses a controller and a router to manage the workers' tasks.
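The driver-to-worker flow above can be sketched with Python's standard `concurrent.futures`. This is an illustrative stand-in only: the real system dispatches tasks to separate per-core worker processes over RPC with an object store, whereas this sketch uses a thread pool for portability, and `analyze_frame` is a hypothetical placeholder for the RoI-suggestion-plus-CNN pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_frame(frame_id: int) -> tuple[int, float]:
    """One worker task: suggest RoIs, classify them with the CNN, and
    return a single density object for the frame. A fixed formula
    stands in for the real capillary-density computation."""
    density = frame_id * 0.5  # placeholder for the computed density
    return frame_id, density

def run_driver(frame_ids, max_workers: int = 4) -> dict[int, float]:
    """Driver role: submit one task per frame, bounded by the number of
    available workers, and gather the returned density objects."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(analyze_frame, frame_ids))

print(run_driver(range(3)))  # {0: 0.0, 1: 0.5, 2: 1.0}
```

The retry-on-worker-failure behaviour described above is handled by the scheduler in the real system and is omitted from this sketch.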
A baseline parallel system uses a controller and a router to prevent workers from being overloaded with tasks, which could cause them to fail. However, these two components (controller and router) can occupy up to two cores for the management of the workers without performing any code execution.

1 The sponsoring company provided a device that was used to capture this data.
