Article Type
Article
Abstract
One of the points of fundamental significance in the development of complex computer systems is the effect of errors, since their probability of occurrence in a widely distributed system is exceptionally high. A computer system must therefore be dependable and provide comprehensive support for fault tolerance, fault detection, recovery mechanisms, failure prediction, etc. The goal of this paper is to develop a way to evaluate the reliability of the Red Hat Linux operating system using data sets of system failures obtained by injecting errors with the Fault Injection Benchmarking and Reliability (FIBR) tool, which allows users to inject various faults into a system to simulate failures and analyse their impact on system reliability.

First, the type and frequency of failures observed in the data set are identified by analysing the error reports generated by the FIBR tool. The failures are then categorized by assigning each a score or rank based on its severity and impact on system performance; this can be done using statistical methods such as failure distribution analysis or reliability block diagrams. Finally, the reliability model is used to calculate key reliability metrics, such as mean time between failures (MTBF) and failure rate. These metrics are used to evaluate the overall reliability of the Red Hat Linux operating system under different scenarios and conditions. Moreover, the results obtained from the reliability analysis are compared with industry standards and benchmarks to determine whether the reliability of the Red Hat Linux operating system meets the required performance level for the intended use case.

The method is founded on two reliability evaluation models: defect count and failure prediction. In the defect count model, operating system defects or errors are counted and used to estimate the operating system's reliability. This is done by analysing the number of defects found during functional testing; this information is then used to predict the number of defects that may occur in the future. The defect count model assumes that the number of defects in the operating system is proportional to its size and complexity, and that reducing defects leads to increased reliability. The model provides a quantitative estimate of the number of defects in the operating system.

Results: Over several attempts to measure and assess reliability (for total run times of 900 sec., 240 sec., 120 sec., 60 sec., and 30 sec.), it was found that the greater the number of bugs injected in a single period, the greater the likelihood of a failure situation, and that increasing the total run time increased the probability of failure states as well.
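As a minimal illustration of the metric calculations named in the abstract, the following Python sketch (not taken from the paper) computes MTBF and failure rate from fault-injection run data; the per-run failure counts are hypothetical, chosen only to match the total run times listed in the results.

```python
def mtbf(num_failures: int, total_run_time: float) -> float:
    """Mean time between failures: observed run time divided by failure count."""
    if num_failures == 0:
        raise ValueError("no failures observed; MTBF is undefined for this run")
    return total_run_time / num_failures

def failure_rate(num_failures: int, total_run_time: float) -> float:
    """Failures per unit time; the reciprocal of MTBF."""
    return num_failures / total_run_time

# Hypothetical failure counts for the run times mentioned in the abstract
# (900 s, 240 s, 120 s, 60 s, 30 s); real counts would come from FIBR reports.
runs = {900.0: 5, 240.0: 3, 120.0: 2, 60.0: 1, 30.0: 1}
for run_time, failures in runs.items():
    print(f"run {run_time:5.0f} s: MTBF = {mtbf(failures, run_time):6.1f} s, "
          f"rate = {failure_rate(failures, run_time):.4f} failures/s")
```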
Keywords
Network Operating System, NOS, UNIX, LINUX, Convolution Neural Network, CNN, Fault Count Models
Recommended Citation
Hasoon, Jamal N.; Qasim, Sarah S.; Alshomali, Mohammed A.; and Stephan, Jane J. (2023) "Reliability Assessment Model of Network Operating System Based on Convolution Neural Network," Al-Esraa University College Journal for Engineering Sciences: Vol. 5: Iss. 7, Article 13.
DOI: https://doi.org/10.70080/2790-7732.1041