My testing task for this sprint was to create a regression test suite for one of our services, “SRC:CLR Console,” a standalone agent that leverages the service's core technology to identify known security vulnerabilities in software components.
The goals I decided to achieve were:
- Create a set of automated repeatable steps to ensure the SRC:CLR Console would be configurable by our users.
- Ensure the console functioned as expected, meaning it could scan both local projects on disk and projects hosted in supported SCM systems (GitHub, GitHub Enterprise, Stash).
I started by creating a set of test cases to get proper testing coverage. The next step was to automate those test cases and execute them with Jenkins CI.
To better understand, consider the test case "Verify SRC:CLR console can be configured correctly". The test steps are:
- Download the latest console version
- Start the console
- Enter passphrase
- Unlock the configuration file
- Enter passphrase
- Set your SRC:CLR access token
- Scan the project
The terminal session will look like this:

```
$ SRC:CLR CONSOLE OUTPUT:
> Welcome to SRC:CLR
> Existing config found, please enter your passphrase to unlock it.
> Passphrase:
$ JENKINS INPUT:
> Passphrase: ******
$ SRC:CLR CONSOLE OUTPUT:
> SRC:CLR >
$ JENKINS INPUT:
> SRC:CLR > config unlock
$ SRC:CLR CONSOLE OUTPUT:
> Passphrase:
$ JENKINS INPUT:
> Passphrase: ********
$ SRC:CLR CONSOLE OUTPUT:
> SRC:CLR >
$ JENKINS INPUT:
> SRC:CLR > conf set --apiUrl https://api.srcclr.com
```
Jenkins is not an interactive application; it is designed for automated execution, which is why this implementation was a challenging task.
The solution I came up with was to use Python's subprocess module, which allows Jenkins to spawn a new process, connect to its input/output/error pipes, and obtain return codes as test results.
To get started, you need to add the Python Plugin to Jenkins and configure a new job to pull the latest code and build the project. The next step was to start the console as a subprocess of the Jenkins job, passing arguments to the Popen constructor:
```python
import os
import subprocess

p = subprocess.Popen(
    [java_home_loc, "-jar", jar_loc],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=False,
    universal_newlines=True,
    preexec_fn=os.setsid,
)
```
I would strongly discourage using `shell=True` before reading the Security Considerations section of the subprocess documentation.
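As a minimal sketch of why that warning matters (the repo value and `echo` command here are purely illustrative), passing a list with `shell=False` keeps untrusted strings from ever being parsed by a shell:

```python
import subprocess

# Illustrative only: pretend this value came from an untrusted source.
repo_loc = "repo; rm -rf /tmp/important"

# Dangerous: with shell=True the string is parsed by the shell, so the
# injected "; rm -rf ..." suffix would run as a second command.
# subprocess.run("echo --repoUrl %s" % repo_loc, shell=True)

# Safe: each list element is delivered to the program as one argument,
# so the malicious suffix is never interpreted by a shell.
args = ["echo", "--repoUrl", repo_loc]
result = subprocess.run(args, capture_output=True, universal_newlines=True)
print(result.stdout.strip())  # the suffix is just text, not a command
```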
I found stdout, stderr, and stdin very helpful for scripting the interactive setup of the console and capturing console output as part of my testing. Here, I am writing to stdin and reading the output from stdout:
```python
from queue import Queue, Empty
from threading import Thread

# Helper assumed by this snippet (not shown in the original post):
# reads lines from a stream and pushes them onto a queue.
def enqueue_output(stream, queue):
    for line in iter(stream.readline, ''):
        queue.put(line)
    stream.close()

outQueue = Queue()
errQueue = Queue()
outThread = Thread(target=enqueue_output, args=(p.stdout, outQueue))
errThread = Thread(target=enqueue_output, args=(p.stderr, errQueue))
outThread.start()
errThread.start()

# Set up configuration entries here
p.stdin.write("passphrase\n\n")
p.stdin.write("conf unlock\n\n")
```
Quick tip: this script will be run by the Jenkins user, so you need to make sure the script has the correct access rights.
One of the challenges in getting this setup working was handling some corner cases.
A possible deadlock was resolved by adding a getOutput() function that drains the queue without blocking: it collects whatever output is available and returns as soon as the queue is empty. This way, your script never blocks waiting on an empty queue, and consequently never ends up in a deadlock state.
```python
def getOutput(outQueue):
    outStr = ''
    try:
        while True:
            # Add output from the queue until it is empty
            outStr += outQueue.get_nowait()
    except Empty:
        return outStr
```
During the second part of my testing, running console scans of different projects, I had to ensure the Jenkins console would display the scanned repos and could handle large result sets along with any failures or errors.
My solution for getting the interactive output is still valid here:
```python
import os
import signal
import sys
from time import sleep

p.stdin.write("scan scm --repoUrl %s\n\n" % repo_loc)
getOutput(outQueue)
output = ""
while True:
    o = getOutput(outQueue)
    sys.stdout.flush()
    output += o
    if len(output.split('\n')) > 10 and "SRC:CLR >" in '\n'.join(output.split('\n')[1:]):
        sys.stdout.flush()
        break
    else:
        sleep(3)

os.killpg(p.pid, signal.SIGTERM)
outThread.join()
errThread.join()
```
I had to add `sys.stdout.flush()` to flush the buffer, which limits the amount of buffered output held in memory; this is the normal way to handle pipelined commands.
The complicated part was handling subprocess failure. Failures should cause the Jenkins job to stop and cancel gracefully. When an error occurred in the console subprocess, I used `os.killpg(p.pid, signal.SIGTERM)` to kill the Python script. This triggers the Jenkins job to send TERM to the process group of the processes it spawned and immediately disconnect, reporting "Finished: ABORTED," regardless of the state of the job.
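A minimal sketch of that failure handling, assuming the subprocess was started with `preexec_fn=os.setsid` as above so it owns its process group (a `sleep` child stands in for the console here):

```python
import os
import signal
import subprocess

# Stand-in for the console: a child started in its own process group,
# mirroring preexec_fn=os.setsid in the real Popen call.
p = subprocess.Popen(["sleep", "60"], preexec_fn=os.setsid)

try:
    # ... interact with the subprocess; raise when the console errors out ...
    raise RuntimeError("simulated console failure")
except RuntimeError:
    # Send SIGTERM to the whole process group, so the console and any
    # children it spawned are terminated together.
    os.killpg(p.pid, signal.SIGTERM)

p.wait()
print(p.returncode)  # a negative value indicates death by signal
```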
I collected all the scan results in a buffer so Jenkins could display the test results for all of the scanned repos after the console had completed its scans:
```python
for repo in list_of_repos:
    r = check_repo(args.java_home_loc, args.jar_loc, repo)
    r = '\n'.join(i for i in r.split('\n')[:-2] if i)
    results.append('%s\nResult:\n%s\n\n' % (repo, r))
```
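To illustrate what that trimming does, here is the same loop with `check_repo` stubbed out (the real helper runs a scan and is not shown in the post); the stub returns scan lines followed by a prompt line and a trailing blank, which the slice `[:-2]` strips off:

```python
# Hypothetical stand-in for the author's check_repo helper: it returns
# scan output ending with a prompt line and a trailing blank line.
def check_repo(java_home_loc, jar_loc, repo):
    return "Scanning %s\n2 vulnerabilities found\nSRC:CLR > \n" % repo

list_of_repos = ["github.com/example/app"]
results = []
for repo in list_of_repos:
    r = check_repo("/usr/bin/java", "console.jar", repo)
    # Drop the last two lines (prompt + trailing blank) and any empties.
    r = '\n'.join(i for i in r.split('\n')[:-2] if i)
    results.append('%s\nResult:\n%s\n\n' % (repo, r))

print(results[0])
```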