Global Journal of Computer Science and Technology, C: Software & Data Engineering, Volume 22 Issue 2

Unlike frameworks that reserve dedicated cores to manage drivers, the proposed system simply re-executes the code if a worker fails [65]. When the code is instantiated in our proposed system, each worker node loads the CNN as a TensorFlow model. Each worker occupies a logical processor, thread, or core, depending on the CPU architecture; we assume it is a core and instantiate one worker per core. Consequently, the number of images processed in parallel increases with the number of cores. We have shown that by combining the deep neural network part with the parallel part, we can process several images at the same time, propose RoIs, and predict whether each bounding box contains a capillary. The number of frames processed in parallel is determined by the maximum number of cores available or by a user-defined value (provided it does not exceed the number of available cores).

IV. Implementation

Many programming languages can implement a parallel processing framework. Python is the fastest-growing programming language [66], [67] and the preferred programming language for deep learning with TensorFlow [68], [69]. This popularity stems from its design philosophy, which emphasizes readability and simplicity [66]. Moreover, the number of libraries, the variety of tools, and the rapidly expanding industrial community supporting Python make the language attractive [70]. Thus, the proposed package was built on top of Python 3.7 [22], OpenCV 4.5.2 [57], Scikit-learn 0.18 [71], Ray 1.2 [26], and TensorFlow 2.3 [35]. The coding and evaluation were done in PyCharm Professional 2021.1 on a Windows 10 operating system. The system can be installed, modified, and used by following the instructions in the readme file of the GitHub repository (www.github.com/magedhelmy1/CCGRID 2022 parallel system for image analysis). To use the system, the user can clone the package from the GitHub repository and import it into their Python environment.
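The worker-per-core fan-out described above can be sketched in a few lines. This is a minimal illustration using only the Python standard library in place of Ray, with the per-worker CNN inference stubbed out; `effective_workers`, `predict_density`, and `process_batch` are hypothetical names, not part of the released package:

```python
import os
from multiprocessing.pool import ThreadPool

def effective_workers(requested=None):
    """Cap the user-requested worker count at the number of available cores."""
    cores = os.cpu_count() or 1
    return cores if requested is None else min(requested, cores)

def predict_density(frame):
    # Stand-in for the per-worker model inference
    # (RoI proposal + capillary classification on one frame).
    return ("density", frame)

def process_batch(frames, requested=None):
    # One worker per core (or per user-requested slot), each
    # consuming frames from the shared batch.
    with ThreadPool(effective_workers(requested)) as pool:
        return pool.map(predict_density, frames)

results = process_batch(list(range(8)), requested=4)
```

In the actual system, each worker is a Ray actor that loads the TensorFlow model once at startup, so the per-frame cost is inference only, not model loading.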
V. Evaluation and Discussion

In this section, we compare the baseline serial architecture, the baseline parallel architecture, and the proposed system using the following three metrics: execution time, speedup, and CPU usage. We show that the proposed Python system is 78% faster than the serial system and 12% faster than the baseline parallel architecture. These three metrics are standardized markers for quantifying system performance [72]. We use them to compare our proposed approach against a serial and a parallel system running the same deep neural network. We show that the proposed system meets the requirements mentioned in Section I and supersedes both the baseline serial system and the baseline parallel system in execution time, speedup, and CPU usage. The proposed system and its serial and parallel counterparts used the same CNN model and the same microcirculation images. We evaluated the three systems by taking the average time to calculate the capillary density per image over a set of 100 images, a number chosen to reduce the margin of error and ensure the accuracy of our calculations.

a) Execution Time

The execution time metric measures the average time needed to calculate a single image's capillary density. To calculate how much faster one architecture is compared to another, we used Equation 1, where ET denotes execution time:

(SlowerET − FasterET) / SlowerET = % Faster    (1)

The average execution times per frame were:

1. Baseline serial architecture — one second per frame;
2. Baseline parallel architecture — 0.25 s per frame; and
3. The proposed system — 0.22 s per frame.

Capillary X: A Software Design Pattern for Analyzing Medical Images in Real-time using Deep Learning

Fig. 5: The execution time of the proposed system against the baseline serial system and the baseline parallel system
We used 100 images in each architecture to reduce the measurement error margin.
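Equation 1 can be checked against the reported per-frame times with a few lines of Python (`percent_faster` is a hypothetical helper name; the times are those listed in the evaluation):

```python
def percent_faster(slower_et, faster_et):
    """Equation 1: (SlowerET - FasterET) / SlowerET, as a percentage."""
    return 100.0 * (slower_et - faster_et) / slower_et

# Average per-frame execution times from the evaluation.
serial, parallel, proposed = 1.00, 0.25, 0.22

print(round(percent_faster(serial, proposed)))    # proposed vs. serial: 78
print(round(percent_faster(parallel, proposed)))  # proposed vs. parallel: 12
```

These values reproduce the 78% and 12% improvements claimed above.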
