Figure 9: Monitoring Tools for Consumption in the Big Data Environment. The upper graph (Sandbox Info) shows the evolution of the number of active users in the different sandboxes per minute over the last 5 days; for example, at that moment there were 13 sandboxes with 97 active users out of a total of 310 users who could connect at some point; the Y axis lists the different sandboxes. The lower graph (Home) shows, in real time, the CPUs, memory and disk consumed in the Big Data environment; for example, the first box indicates 849 cores (CPUs) in use out of a total of 910, the second box indicates 285 TB used out of a total of 364 TB, and the third box indicates 1.18 TB consumed out of a total of 17.47 TB.

We understand that the Platform Validation, Deployment, and Reliability and Research phases described above correspond to the qualities that must be achieved in a Big Data architecture. To these are added features such as distributed data processing, scalability in both software and hardware, availability, and the capacity to distribute a large volume of data, together with three main stages for the implementation of the Big Data architecture, which must be aligned with the proposals in this article: "Obtaining data from different sources", "Real-time data processing", and "Analysis, visualization and decision making" (Quiroz et al., 2019), as sketched at the end of this section.

III. Results

As a result of the integration, several projects now run in this Big Data environment. They are in production with governed data, defined flows, and better response times for processes that were previously developed in traditional databases. With this, the business areas have more efficient products and services. The integration of the Big Data environment is also the starting point for shutting down the local systems and databases, which are a problem for the organization because they cause dispersed and inconsistent information among the business areas that use these data. This is solved with self-service sandboxes containing centralized and governed data.
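As an illustration of the three implementation stages cited above from Quiroz et al. (2019), the following is a minimal sketch, not the implementation used in the entity, of a streaming pipeline built with PySpark Structured Streaming. The broker address, topic name, JSON fields and output paths are hypothetical and only serve to show how ingestion from a source, real-time processing, and publication for analysis and visualization could be wired together.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("three-stage-sketch").getOrCreate()

# Stage 1: obtain data from different sources (here, a hypothetical Kafka topic).
transactions = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "transactions")                 # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS raw")
)

# Stage 2: real-time data processing (parse the payload and aggregate per minute).
per_minute = (
    transactions
    .withColumn("amount", F.get_json_object("raw", "$.amount").cast("double"))
    .withColumn("ts", F.get_json_object("raw", "$.ts").cast("timestamp"))
    .withWatermark("ts", "10 minutes")
    .groupBy(F.window("ts", "1 minute"))
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("events"))
)

# Stage 3: expose the results for analysis, visualization and decision-making
# (here, written to a hypothetical governed zone that sandboxes can consume).
query = (
    per_minute.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "/data/governed/transactions_per_minute")
    .option("checkpointLocation", "/data/checkpoints/transactions_per_minute")
    .start()
)
query.awaitTermination()
```

In this sketch the governed output zone plays the role of the centralized, governed data that the self-service sandboxes described in the Results consume, so each business area reads from one consistent source instead of a local database.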