We’ve devised an automated pipeline to ensure that our printer is in good shape on a continuous basis. On a regular day, we have developers modifying code in many areas: from cloud applications to the web UI down to the firmware. We try to strike a balance between maintaining good test coverage and keeping a quick turnaround time for getting our code onto the lab printers. Our internal print lab contains about 40 printers that are churning through prints constantly. We have materials engineers fabricating new resins, print researchers trying to model print processes, sales folks printing potential customer models, hardware engineers creating new products, and customer service troubleshooting customer prints. All those people end up being our testers for our ever-changing software! We hear about issues that people experience while printing and are able to turn around fixes in a couple of days. A release branch ends up living in the lab for over a month before that new software is released to customers. This way, customers can have a great, (hopefully) bug-free experience with new software.
While the lab is a great help to us in testing, we also don’t want to hinder all the productive discoveries that are going on there. We have a number of automated processes in place that significantly reduce the risk of unstable software before the software is even deployed to the lab. Our number one priority is that the printer still prints fine. If there’s a UI glitch, that’s less disruptive in our lab environment.
Our automated CI pipeline looks like this:
- Automated pre-commit tests are run, based on code dependency graph closure
- Build and test (unit-tests and integration tests)
- Deployment to a real printer
- Airprint (a print without resin). We check various things during the airprint:
  - No alerts (software or hardware)
  - Print runs to completion with no errors
  - Projected slices are correct
  - Print is able to abort
- Printer is power cycled and we make sure the printer comes up with no issues
- Every night, the latest version of software that passes the above steps gets deployed onto several printers in the lab.
- If any of the above steps fail, the pipeline stops and an email is sent to the suspected culprits. If there is an error, fixing the build becomes the highest priority for that developer. We do sometimes have flaky tests; those also become high priority to fix so our pipeline can continue to run smoothly. We’ve found that having our software go through the above steps minimizes issues found in the lab, especially those that are print-related.
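The gating behavior above can be sketched in a few lines of Python. This is purely an illustration of the control flow (stage names and the `notify` callback are hypothetical), not our actual Jenkins configuration:

```python
def run_pipeline(stages, notify):
    """Run ordered CI stages; stop at the first failure.

    `stages` is a list of (name, callable) pairs where each callable
    returns True on success. `notify` is called with the failed stage's
    name (e.g. to email the suspected culprits).
    """
    for name, stage in stages:
        if not stage():
            notify(name)   # alert the developers responsible
            return False   # pipeline stops; nothing gets deployed
    return True            # all stages passed; deploy to the lab
```

The key property is that a failure anywhere (pre-commit tests, build, airprint, power cycle) prevents every later stage, so unstable software never reaches the lab printers.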
We release new software to customers about every six weeks with new features and improvements to printing. Because the software is being updated constantly in the lab, we have the luxury of to-be-released software operating in our lab for several weeks before customers receive it. This allows us to release software to customers with high confidence of its reliability and quality.
Below is a snapshot of our Jenkins build status dashboard. We have several jobs that handle building and testing various branches as well as different software packages.
At Carbon we have a large set of integration tests that start up a bunch of services (and monitors) and then run tests on them, using Python and unittest. Since we need these services to be running during all of our tests, we put the code to start and stop them inside unittest.TestCase’s setUpClass and tearDownClass, which are run once before/after you run a unittest.TestCase:
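A minimal sketch of the pattern, using a stand-in `FakeService` class in place of the real service processes (the service names and start/stop details are illustrative):

```python
import unittest


class FakeService:
    """Stand-in for a real printer service process (illustration only)."""

    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


class IntegrationTest(unittest.TestCase):
    """Starts shared services once per TestCase, not once per test."""

    services = []

    @classmethod
    def setUpClass(cls):
        # Runs ONCE, before any test in this class.
        cls.services = [FakeService(n) for n in ("printer", "monitor")]
        for svc in cls.services:
            svc.start()

    @classmethod
    def tearDownClass(cls):
        # Runs ONCE, after the last test in this class.
        for svc in cls.services:
            svc.stop()

    def test_services_are_up(self):
        self.assertTrue(all(s.running for s in self.services))
```

Because the start/stop cost is paid once per TestCase rather than once per test, adding more tests to the class is nearly free.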
Over time, we built up a large suite of integration tests to the point where it became unwieldy to keep them all in a single Python class/file. However, breaking this up into multiple IntegrationTest subclasses would require running the setUpClass and tearDownClass once per subclass, which would increase our test runtime by a nontrivial amount. To get the best of both worlds (starting/stopping all processes once, but also having modular integration test code), we use two steps: simple test discovery to find all of our integration tests, and creating a single TestCase at runtime that will run all of our defined tests in each file.
Test discovery is as simple as iterating through the relevant files and looking for IntegrationTest subclasses:
Once you have a list of tests, we can use type() to dynamically build a single TestCase out of all of our IntegrationTests:
which will run all of your tests without starting and stopping processes for each test file you have!
SLICE COMPARISON TEST
Producing the slices for the 3D model is an important step in our printing process. The printer prints by projecting a UV slice image that cures the resin at the window. You can think of these slices as what you would get by cutting a 3D part into thin layers, like slicing a loaf of bread.
To ensure that we don’t mess up printing, we have a test that will take two versions of the code and compare the slices they output for a certain set of STLs. The test goes through each slice png (pixel by pixel) produced by the old version of software and compares it with the corresponding slice png of the newer version of software. The test produces a spreadsheet that lists out each slice and how many pixels were added, removed, or changed in grayscale, as well as the overall percent difference of the whole slice. It will also produce a “diff slice” png, in which the pixels of the slice are colored green (if a pixel was added), red (if a pixel was removed), or blue (if the pixel changed in grayscale). You can see an example diff slice in the picture above.
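The per-pixel comparison can be sketched with NumPy. This is a simplified illustration operating on grayscale arrays rather than real PNG files, and it assumes a value of 0 means "no pixel" — the conventions in our actual tool may differ:

```python
import numpy as np


def diff_slices(old, new):
    """Compare two grayscale slice images (2-D uint8 arrays of equal shape).

    Returns (added, removed, changed, percent_diff, diff_image), where the
    RGB diff image colors pixels green (added), red (removed), or blue
    (grayscale changed), matching the "diff slice" png described above.
    """
    added = (old == 0) & (new > 0)
    removed = (old > 0) & (new == 0)
    changed = (old > 0) & (new > 0) & (old != new)

    diff = np.zeros(old.shape + (3,), dtype=np.uint8)
    diff[removed] = (255, 0, 0)   # red: pixel disappeared
    diff[added] = (0, 255, 0)     # green: pixel appeared
    diff[changed] = (0, 0, 255)   # blue: grayscale value changed

    n_diff = int(added.sum() + removed.sum() + changed.sum())
    percent = 100.0 * n_diff / old.size
    return int(added.sum()), int(removed.sum()), int(changed.sum()), percent, diff
```

Running this per slice and writing the counts into a spreadsheet row gives exactly the kind of report the test produces.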
Every night, we run this test on a small set of STLs that cover edge cases or have given us slicing problems in the past. From release to release, the test is also run on a much larger set of STLs, built from all the STLs customers have given us permission to use. Every time changes are detected, we meet to understand the differences. If the changes were not intentional or do not improve the overall printability of parts, we may decide to revert or fix the change.