Setting up a continuous integration environment involves a few steps
that differ depending on the technology used, but the following general
pattern can usually be recognized:
1) A build is triggered automatically when new code is committed to the repository
2) During the build, the code is compiled and unit tests are run
3) If everything goes well, a new distributable is created
4) The new software version is automatically deployed to a staging environment
5) Automated UI regression tests are executed against the new software version
Finally, there is the actual release phase, which you perform as often as
fits your project.
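The pattern above can be sketched as a single script. The stage functions below are placeholders standing in for real build tooling, not commands from any specific CI product:

```shell
#!/bin/sh
# Hypothetical CI pipeline sketch: each stage is a placeholder function,
# and set -e aborts the whole run as soon as any stage fails.
set -e

build()      { echo "compiling code"; }
unit_tests() { echo "running unit tests"; }
package()    { echo "creating distributable"; }
deploy()     { echo "deploying to staging"; }
ui_tests()   { echo "running UI regression tests"; }

build
unit_tests
package
deploy
ui_tests
echo "pipeline complete: ready for a release decision"
```

In a real setup, each function body would invoke the project's actual compiler, test runner, packager, and deployment tooling.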
The majority of software projects never automate UI regression testing.
Software is often released with fingers crossed: it is barely tried out at all
before release. This leads to trivially observable bugs in production
and angry customers.
We have implemented all of these CI steps by integrating a set of SaaS services.
Our goal was to build and test a new software version whenever changes are made,
and to report the results back to developers as soon as possible. The feedback
time is critical: with fast feedback, a regression is easier to understand.
When we make code changes, we push them to Bitbucket. A commit hook then
triggers our build process, which runs in Codeship. Codeship compiles
our code and runs the unit tests. On success, Codeship deploys the new
version to our staging server and signals Usetrace to start
automated regression testing against the user interface.
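A minimal sketch of the trigger step; the payload shape and the webhook endpoint below are illustrative assumptions, not the real Bitbucket or Codeship API:

```shell
#!/bin/sh
# Hypothetical commit-hook helper: builds the JSON payload that a hook
# would POST to the CI service to start a build for a given commit.
build_trigger_payload() {
  commit="$1"
  printf '{"commit":"%s","branch":"master"}' "$commit"
}

# A real hook would send it along, e.g. (CI_WEBHOOK_URL is a placeholder):
#   curl -X POST "$CI_WEBHOOK_URL" -d "$(build_trigger_payload "$(git rev-parse HEAD)")"
build_trigger_payload "abc123"
```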
Every step is reported in our team chat; we're using HipChat. Regression
test results are reported in real time: each failing test is reported
immediately, and once the test suite has completed, a summary
is given. This allows us to notice and fix problems even before
the whole CI cycle has completed.
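As an illustration of the reporting step, here is a sketch of a notification helper. The JSON shape and the webhook idea are assumptions for illustration, not HipChat's actual API:

```shell
#!/bin/sh
# Hypothetical chat-notification helper: formats a status message as JSON,
# coloring it green on success and red on failure.
notify() {
  status="$1"; text="$2"
  color=green
  [ "$status" = ok ] || color=red
  printf '{"color":"%s","message":"%s","notify":true}' "$color" "$text"
}

# A CI step would POST this to the chat service (CHAT_WEBHOOK is a placeholder):
#   curl -X POST "$CHAT_WEBHOOK" -H 'Content-Type: application/json' \
#        -d "$(notify ok 'Build passed')"
notify fail "UI test failed: checkout flow"
```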
Our CI cycle currently takes around six minutes, most of which is spent
in the compilation phase. Regression testing currently stands at two minutes:
we have automated the six most business-critical test cases and run them
on our supported browsers, the latest versions of Chrome and Firefox. So we have
a total of 12 automated UI tests running in under 120 seconds; on average, one
test runs in ten seconds.
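The arithmetic behind those numbers, as a quick sanity check:

```shell
#!/bin/sh
# Back-of-the-envelope check: 6 test cases x 2 browsers, 120 seconds total.
CASES=6; BROWSERS=2; TOTAL_SECONDS=120
total_tests=$((CASES * BROWSERS))     # 12 test runs per cycle
avg=$((TOTAL_SECONDS / total_tests))  # 10 seconds per test on average
echo "$total_tests tests, ${avg}s average"
```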
Finally, before we do a release, we check the current software performance on the Usetrace dashboard. Up-to-date facts about how the business-critical features perform make the decision of whether or not to release less stressful.
Here’s the whole picture as a diagram: four SaaS developer tools make up the whole CI pipeline.
Disclaimer: Usetrace is a SaaS web testing solution that provides a cross-browser platform to create, maintain, and run UI tests, and to integrate automated testing into the CI process.