Understanding SeBAz.py
This page explains what SeBAz.py does.
To start off, SeBAz.py is the entry point into all the scripts that make the tool function. Hence, this script takes care of:

- Parsing the arguments given during the program call
- Calling the functions as per the user's needs
- Displaying the progress bar and results on the terminal
- Writing findings into the CSV
- Generating the report
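The responsibilities above can be sketched as a single entry-point function. This is a minimal, hypothetical skeleton, not SeBAz's actual code: `fake_test()` is an invented stand-in for the real benchmark tests, and the CSV is written to an in-memory buffer rather than a file.

```python
# Hypothetical sketch of the entry-point flow; fake_test() and the
# argument names are illustrative, not SeBAz's real internals.
import argparse
import csv
import io
import time


def fake_test(control_id: str) -> list:
    # Stand-in for the real benchmark test of one control
    return [control_id, "PASS"]


def main(argv: list) -> str:
    # 1. Parse arguments given during the program call
    parser = argparse.ArgumentParser(description="CIS benchmark auditor")
    parser.add_argument("-r", "--recommendations", nargs="*", default=["1.1.1"])
    options = parser.parse_args(argv)

    start = time.time()

    # 2. Run the requested tests
    results = [fake_test(rec) for rec in options.recommendations]

    # 3. Write findings into a CSV (an in-memory buffer here)
    buffer = io.StringIO()
    csv.writer(buffer).writerows(results)

    # 4. Display the result on the terminal
    print(f"Tested {len(results)} control(s) in {time.time() - start:.2f}s")
    return buffer.getvalue()


report = main(["-r", "1.1.1", "1.1.2"])
```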
SeBAz.py imports the following:

- `parser` from `argumentsParser` to deal with the options given by the user during the program call
- `time`, `gmtime`, `localtime` from `time` to calculate how long the program ran
- `get_recommendations` from `optionsParser.py` to get the list of control IDs as specified in the user's script call
- `disp_exp` from `optionsParser.py` to display the explanations of the controls
- `generatePDF` from `reportGenerator.py` to generate report(s) from existing CSV(s)
- `geteuid` from `os` to ensure that the program exits if not run as root
- `get_manager` from `enlighten` to let enlighten control the terminal space and display the progress bar and text properly
- `system` from `os` to call the clear command and to make the logs directory
- `bold`, `red`, `green`, `yellow` from `huepy` to print colored text on the terminal
- `path` from `os` to manage the paths of the CSV, report, and log files
- `writer` from `csv` to write findings into the spreadsheet
- `ThreadPoolExecutor` from `concurrent.futures` to enable concurrent testing of controls using multithreading
- `test` from `benchmarks.py` to actually perform the tests
- `repeat` from `itertools`, since the map function requires an iterable to be passed as argument, and `test` requires some fixed arguments to be sent as well
- `createPDF` from `reportGenerator.py` to create the report in the current run
- `Image` from `fabulous.image` to print the logo on the terminal
SeBAz.py uses the following variables:

- `options` stores the options that were given as arguments during the program call. For more information, read this wiki
- `start` stores the program start timestamp
- `gmt_time` stores the program start time in GMT
- `local` stores the program start time in the local time zone
- `recommendations` stores the list of recommendations that the user wants to test in this run
- `manager` stores the enlighten terminal manager
- `file_path` stores the absolute path to the CSV that is created for this run
- `length` stores the total number of controls that the user wants to check in this run
- `score` stores the score of the system (only those recommendations that are SCORED and have PASSED)
- `passed` stores the count of all the recommendations that have PASSED, irrespective of whether they are scored or not
- `passd` is the enlighten progress bar counter for the controls that have passed
- `faild` is the enlighten progress bar counter for the controls that have failed
- `check` is the enlighten progress bar counter for the controls that need to be checked by the user
- `log_file` is the path of the log directory
- `results` stores all the results of the tests that were performed. `results` is a 2-D list in which each entry consists of:
  - the score: 2 if the control passed and is scored, 1 if the control passed but is not scored, and 0 if the control failed or needs to be checked
  - the result that needs to be written into the CSV. For more information, read this wiki
- `duration` stores the time taken to execute the program as a string
- `result` stores the result of the benchmark as a string
- `end` stores the total time taken to execute the program as a float
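Given the 2-D `results` list described above, the `score` and `passed` counters follow directly from the per-entry score values. A hypothetical sketch with invented sample data (not real SeBAz output):

```python
# Deriving score/passed/duration from the 2-D results list; the
# sample entries below are invented for illustration.
from time import gmtime, strftime, time

start = time()

# Each entry: [score, row-for-CSV]; 2 = scored and passed,
# 1 = passed but not scored, 0 = failed or needs manual checking
results = [
    [2, ["1.1.1", "PASS"]],
    [1, ["1.1.2", "PASS"]],
    [0, ["2.1.1", "FAIL"]],
]

score = sum(1 for s, _ in results if s == 2)   # scored AND passed
passed = sum(1 for s, _ in results if s >= 1)  # passed, scored or not
result = f"{passed}/{len(results)} controls passed (score {score})"

end = time() - start                            # elapsed time as a float
duration = strftime("%H:%M:%S", gmtime(end))    # elapsed time as a string
```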