In this tutorial we will set up E2E tests for a React Native app in a build pipeline on AppCenter using the Cavy framework. We will cover:
- what is Cavy and how to write E2E tests for React Native
- how to run Cavy tests in a build pipeline
- how to run iOS simulator / Android emulator in CI pipeline in MS Appcenter,
- intro to basic shell scripting
As a bonus, we will also cover:
- running Jest coverage tests and capturing the Jest coverage output to a file,
- collecting device logs (for iOS and Android), together with logs from the JS side
- reporting: zipping all this data and sending it to your mailbox using the SendGrid Mail API (a perfect option for nightly builds)
Hi there! I am Stefan Majiros, a senior React Native developer running a mobile MVP development studio at stefan-majiros.com - and you are reading another of my tech blog posts.
Cavy, Appium, Detox
To start, Cavy is an E2E testing framework. As you can read in Cavy's docs here:
"Cavy is a comparable tool to Appium. The key difference is that Appium uses native hooks to access components (accessibility IDs), whereas Cavy uses React Native refs. This means that Cavy sits directly within your React Native environment (working identically with both Android and iOS builds), making it easy to integrate into your application very quickly, without much overhead".
Also, unlike e.g. Detox, installation and setup are identical for iOS and Android. Cavy is really lightweight and much easier to set up and get running quickly - but on the other hand, because Cavy runs only in the JS environment, you cannot (yet) interact with native components such as permission dialogs for the camera, push notifications, etc.
So, whether Cavy is the right tool for you depends on what you need to test.
1-min Intro to Cavy
For example, even if you are using camera scanning in your app, there will be some kind of onScan / onSuccessScan method in your camera component, and you can still use Cavy for E2E tests. See the 🚀 (rocket emoji) markers in the code example below:
import React, {Component} from 'react';
import {Dimensions, StyleSheet, TouchableOpacity, View} from 'react-native';
import {RNCamera} from 'react-native-camera';
import {launchImageLibrary} from 'react-native-image-picker';
import {QRreader} from 'react-native-qr-decode-image-camera';
import QRCodeScanner from 'react-native-qrcode-scanner';
import Ionicon from 'react-native-vector-icons/Ionicons';

export default class CameraScanningScreen extends Component {
  scanner;

  constructor(props) {
    super(props);
    this.state = {
      cameraType: RNCamera.Constants.Type.front,
      uri: '',
      isCameraVisible: true,
      torchOn: false,
    };
  }

  // 🚀 this method calls business logic
  onSuccessScan = e => {
    console.log('Scanned', {e});
    if (this.props.onScan) {
      console.log('calling parent', {e});
      this.props.onScan(e.data);
    }
  };

  render() {
    const {height} = Dimensions.get('window');
    return (
      <View style={{flex: 1}}>
        {/* 🚀 QRCodeScanner uses react-native-camera under the hood */}
        <QRCodeScanner
          fadeIn={true}
          ref={node => {
            this.scanner = node;
          }}
          cameraProps={{
            androidCameraPermissionOptions: null,
            checkAndroid6Permissions: false
          }}
          checkAndroid6Permissions={false}
          cameraStyle={{flex: 1, height: height}}
          // 🚀
          onRead={this.onSuccessScan}
          showMarker={true}
          flashMode={
            this.state.torchOn === false
              ? RNCamera.Constants.FlashMode.off
              : RNCamera.Constants.FlashMode.torch
          }
        />
      </View>
    );
  }
}
So if you need to test the flow that is executed by calling the onScan / onSuccessScan method, you only need to:
- give the parent component (CameraScanningScreen) a Cavy testID
- and then, in the test definition, use the same testID to access the component and call the onSuccessScan method directly (thanks to Cavy's findComponent method). See the findComponent example below:
import {TestScope} from 'cavy';

export default function (scope: TestScope) {
  scope.describe('QR code scanning', function () {
    scope.it('works when a QR code is scanned', async function () {
      // 🚀
      const screen = (await scope.findComponent('CameraScanningScreen')) as any;
      await scope.pause(1000);
      // 🚀 now you can call the method directly
      screen.onSuccessScan({data: 'vaccCY2x...'});
    });
  });
}
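One more piece, shown here for completeness: Cavy only runs specs that are registered with its Tester component at the root of the app. Here is a minimal registration sketch based on Cavy's docs - the file paths and the app name are assumptions:

// index.js (sketch): register the spec with Cavy's Tester at the app root
import React from 'react';
import {AppRegistry} from 'react-native';
import {Tester, TestHookStore} from 'cavy';
import App from './src/App';
import cameraSpec from './specs/cameraSpec'; // hypothetical path to the spec above

const testHookStore = new TestHookStore();

const AppWithTests = () => (
  <Tester specs={[cameraSpec]} store={testHookStore}>
    <App />
  </Tester>
);

AppRegistry.registerComponent('YourAppName', () => AppWithTests);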
As you have already seen one React component above (one that uses the camera to scan a QR code and then calls custom business logic), here are a few other files to give you a full but brief example of Cavy:
- on the left, you can see the 'container pattern' - a wrapper component for the presentational component shown a few lines above. I added the testID for Cavy and connected the store (actions, dispatches, etc.) there
- on the right, you can see an example of a Cavy test spec

Following is the content of testable.tsx: a reusable helper that can be used across your app to inject Cavy testIDs into the components you need to mark. The benefit is that the Cavy dependency stays in a single place:
import {useCavy, wrap} from 'cavy';
import React from 'react';

export function testable<P>(
  FunctionalComponent: React.FunctionComponent<P>,
): React.FunctionComponent<P & {testID?: string}> {
  // if (!__DEV__) {
  //   return FunctionalComponent;
  // }
  const ClassComponent = wrap(FunctionalComponent);
  return (props: P & {testID?: string}) => {
    const generateTestHook = useCavy();
    return (
      <ClassComponent
        {...props}
        ref={props.testID == null ? undefined : generateTestHook(props.testID)}
      />
    );
  };
}
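To connect the dots with the container pattern mentioned above: for a class component such as CameraScanningScreen you can also attach the test hook directly, since class components accept refs. A sketch - the prop wiring here is an assumption:

import {useCavy} from 'cavy';
import React from 'react';
import CameraScanningScreen from './CameraScanningScreen';

// Hypothetical container: attaches the Cavy test hook to the class component,
// so a spec can find it via scope.findComponent('CameraScanningScreen').
export default function CameraScanningContainer({onScan}: {onScan: (data: string) => void}) {
  const generateTestHook = useCavy();
  return (
    <CameraScanningScreen
      ref={generateTestHook('CameraScanningScreen')}
      onScan={onScan}
    />
  );
}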
Running Cavy E2E tests
To run Cavy tests, you should install "@types/cavy", "cavy-cli", and "cavy" from npm - but please do not forget to check Cavy's docs for new updates, right here.
Cavy also allows you to pass a few specific command-line options:
- "--xml" - so Cavy will output test results in a JUnit-compliant format
- "-t" - to specify the boot timeout, i.e. the time within which the tests should start
To run Cavy tests, I am using these commands in npm scripts (in package.json):
"e2e:ios": "cavy run-ios", | |
"e2e:android": "cavy run-android --verbose --xml" |
NOTE: I recommend running the E2E Cavy tests on the release variant (when using the debug variant, the app will not always crash on errors, so your E2E tests will not give you 100% assurance that the app truly works).
If you are using multiple environments / product flavors / schemes, you need to specify the --variant / --scheme / --configuration option, which will be passed through to the underlying react-native run command, like this:
// in package.json
{
  "e2e:ios": "cavy run-ios --scheme GreenPassIssuerProdRelease --configuration Prod.Release -t 10 --verbose",
  // "e2e:iosDebug": "cavy run-ios --scheme GreenPassIssuerProdDebug --configuration Prod.Debug -t 10 --verbose",
  "e2e:android": "cavy run-android --variant=prodRelease -t 2 --verbose --xml"
  // "e2e:androidDebug": "cavy run-android --variant=prodDebug -t 2 --verbose --xml"
}
Running Cavy E2E tests in CI / Appcenter
Now let's continue and write the code that will launch the Cavy E2E tests in Appcenter.
Pre-requisites: basic shell knowledge is beneficial, but if you don't feel like a shell / bash pro, do not worry - I added a few explanations.
Folder Structure
So that you do not get lost while reading this article, this is the folder structure I used:
package.json
ios/
android/
src/
...
appcenter-pre-build.sh
scripts/
  e2e.sh
  cancelBuild.sh
  coverageReport.sh
  cavyParser/
    cavyResultsAsJson.js
    package.json
CI Provider Limits
When using CI services, you should take a look at the limits on the maximal allowed build duration, as E2E tests can exceed the allowed time very easily.
At the time of writing, the maximal duration of each build is limited to 60 minutes in Azure DevOps and in Appcenter - so you may need to create a new app in Appcenter just for running these E2E tests.
When using a self-hosted CI server, build time duration would not be a limitation.
Code
I placed all the following code into the appcenter-pre-build.sh file, so Appcenter will detect and run it when needed. Docs for the other Appcenter build scripts are here.
Usually, if you use react-native-env-config to handle sensitive API keys / credentials / Firebase keys, you would put something like this into appcenter-pre-build.sh:
# Firebase Config using base64
echo "Injecting secrets..."
echo "Updating Google JSON"
echo $GoogleServicesJson | base64 --decode >"$APPCENTER_SOURCE_DIRECTORY/android/app/google-services.json"
echo "Updating Google plist"
echo $GoogleServicesPlist | base64 --decode >"$APPCENTER_SOURCE_DIRECTORY/ios/GoogleService-Info.plist"
echo "Finished injecting secrets..."
echo "Google Services"
cat "$APPCENTER_SOURCE_DIRECTORY/ios/GoogleService-Info.plist"
echo "IsDev=${IsDev}" >>.env
echo "APPCENTER_BUILD_ID=${APPCENTER_BUILD_ID}" >>.env
echo "APPCENTER_BRANCH=${APPCENTER_BRANCH}" >>.env
and then in Appcenter, you would use environment variables and populate them with sensitive data:

Because I used Android product flavors, a small change was necessary in android/app/build.gradle - I was getting an error like "installProdRelease not found" during the build process, and later found out it was the result of invalid signing properties.
Microsoft recommends using this (link here):
android {
    signingConfigs {
        releaseSigningConfig {
            storeFile rootProject.file("app/testapp.jks")
            storePassword System.getProperty("APPCENTER_KEYSTORE_PASSWORD")
            keyAlias System.getProperty("APPCENTER_KEY_ALIAS")
            keyPassword System.getProperty("APPCENTER_KEY_PASSWORD")
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.releaseSigningConfig
        }
    }
}
However, when I added a Gradle logger at the end of my app/build.gradle file:
...
// Run this once to be able to run the application with BUCK
// puts all compile dependencies into folder libs for BUCK to use
task copyDownloadableDepsToLibs(type: Copy) {
    from configurations.compile
    into 'libs'
}

// put this at the end of the file
// logging Gradle variables - the Microsoft approach gives null
project.logger.lifecycle("ENV" + System.getProperty('APPCENTER_KEYSTORE_PASSWORD'))
project.logger.lifecycle("ENV" + System.getProperty('APPCENTER_KEY_PASSWORD'))
// logging Gradle variables - when signing details are added as environment variables without the APPCENTER_ prefix, all is fine
project.logger.lifecycle("ENV" + System.getenv('KEYSTORE_PASSWORD'))
project.logger.lifecycle("ENV" + System.getenv('KEY_PASSWORD'))

apply from: file("../../node_modules/@react-native-community/cli-platform-android/native_modules.gradle"); applyNativeModulesAppBuildGradle(project)
it always printed "null", which led to the invalid signing and that strange error above.
To solve the issue, I:
- put the keystore file into the app code (I did not upload the keystore in the Appcenter UI),
- put the keystore information into environment variables in Appcenter (KEYSTORE_PASSWORD, KEY_PASSWORD, and KEY_ALIAS in the Appcenter screenshot above), not into the Gradle section,
- used System.getenv instead of System.getProperty in android/app/build.gradle - the resulting signing config is shown below.
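Putting those three changes together, the signing block from Microsoft's snippet ends up looking roughly like this (the keystore path and variable names follow the examples above):

android {
    signingConfigs {
        releaseSigningConfig {
            // keystore file committed with the app code, as described above
            storeFile rootProject.file("app/testapp.jks")
            // plain environment variables (no APPCENTER_ prefix), read via getenv
            storePassword System.getenv("KEYSTORE_PASSWORD")
            keyAlias System.getenv("KEY_ALIAS")
            keyPassword System.getenv("KEY_PASSWORD")
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.releaseSigningConfig
        }
    }
}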
NOTE: Also, if you use any Firebase services, I recommend using base64 encoding for the google-services.json file, as then there is no need to escape the JSON.
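To produce the values for those environment variables, you can encode the files locally and paste the output into the Appcenter UI - a quick sketch (the -i flag is the macOS base64 syntax; on Linux, just pass the file name):

# encode the secrets locally, then paste the output into the
# GoogleServicesJson / GoogleServicesPlist variables in Appcenter
base64 -i android/app/google-services.json
base64 -i ios/GoogleService-Info.plist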
Skipping E2E Tests For Some Branches
However, whenever you add an appcenter-pre-build.sh file to your React Native project, the script is executed on every build of every branch.
When you need to skip running E2E tests on some branch for whatever reason, you can read an environment variable (e.g. RUN_E2E_TESTS) in the appcenter-pre-build.sh build script.
Then, if RUN_E2E_TESTS is not set for a given branch, you can exit the script gracefully and Appcenter will proceed to build your app as usual (without running E2E tests):
if [ -z ${RUN_E2E_TESTS+x} ]; then
  echo "E2E are not turned on for this build."
  exit 0 # exiting with 0 skips the rest of this script but lets the build continue
  # (exit 1, in contrast, would fail the whole build - similar to throwing an exception)
fi
If you are not familiar with this syntax: -z tests whether a string is empty (or "undefined", in JavaScript terms). The ${RUN_E2E_TESTS+x} expansion evaluates to "x" only when the variable is set, so the condition really checks whether the variable is defined at all - which also means the tests are treated as turned on even when somebody sets RUN_E2E_TESTS to the string "false".
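A quick demonstration of how that expansion behaves (safe to try in any bash shell):

unset RUN_E2E_TESTS
echo "[${RUN_E2E_TESTS+x}]"   # prints [] - unset, so the -z check succeeds and we skip the tests

RUN_E2E_TESTS=false
echo "[${RUN_E2E_TESTS+x}]"   # prints [x] - set (even to "false"), so the tests would run

RUN_E2E_TESTS=""
echo "[${RUN_E2E_TESTS+x}]"   # prints [x] - set-but-empty still counts as set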
Replacing babelrc
Then, if you wish to read React Native logs from the terminal and you are using babel-plugin-transform-remove-console to strip console logs from production builds of your React Native app, you will need to turn this off.
The simplest way I found to do this is to create a copy of the .babelrc file, named e.g. .babelrc_E2E, and remove the part that enables the babel-plugin-transform-remove-console plugin:
//"production": { | |
// "plugins": [ | |
// "transform-remove-console" | |
// ] | |
//} |
Then, when running E2E tests, you just swap the files using the following code added to your script (in appcenter-pre-build.sh):
# replace babelrc content, to output logs from React Native in the console
mv .babelrc .babelOriginal
mv .babelrc_E2E .babelrc
Running Jest Coverage Tests
The next step is running the Jest tests in coverage mode. Jest offers a "--json" parameter that prints the output in JSON format, but using it means losing the beautiful summary table below. So instead of "--json", we will simply redirect the Jest output into a file using the '>' bash operator:

echo "Starting coverage report"
npm run test:coverage > jestOutput.txt

# the command above calls this npm script from package.json:
# "test:coverage": "jest --coverage --detectOpenHandles --silent"
Then, after the Jest coverage run, we have the coverage results table in the jestOutput.txt file:

Calling other scripts
NOTE: If you want to place the command that runs the coverage report into a separate script, you should call it using the "source" command, so that all variables are shared with the calling script:

echo "Starting coverage report"
source "${APPCENTER_SOURCE_DIRECTORY}/scripts/coverageReport.sh"
Running E2E Cavy Tests in CI pipeline: AppCenter
Now comes the part about running E2E tests with Cavy in build pipelines.
I use the code below to run E2E tests on a nightly basis. If you are interested in how to run Appcenter builds at a scheduled time (e.g. at night), take a look at another of my blog posts here:
To run Cavy E2E tests for React Native in the Appcenter pipeline, I put all the logic into a new file called e2e.sh. The skeleton of this code is pretty straightforward and in general looks like this:
if [ -n "$APPCENTER_ANDROID_VARIANT" ]; then
  echo "Setup & boot Android emulator"
  # ...some stuff
  npm run e2e:android
else
  echo "Setup & boot iOS Simulator"
  # ...some stuff
  npm run e2e:ios -- --udid "$SIMULATOR_ID"
fi
Now, let's talk more about that "some stuff" part. We will start with Android.
How To Boot Android Emulator in CI Build Pipeline To Run E2E tests
We need to boot the Android emulator first. Let's start by defining some constants in the e2e.sh file:
SIMULATOR_IMAGE="system-images;android-28;google_apis;x86"
SIMULATOR_NAME="Pixel_XL_API_28"
Finding the system image name can be a little tricky (it may change over time), so this is how you can list all available Android system images:
android list sdk --use-sdk-wrapper
# or if you want to get only the image names
android list sdk --use-sdk-wrapper | grep system-images -w
# with redirecting output to a file
android list sdk --use-sdk-wrapper | grep system-images -w > tmp2.txt
The commands above give you output like the following, from which you can pick the image name:

Then we need to modify PATH to make sure all Android SDK commands are available:
ANDROID_HOME=~/Library/Android/sdk
PATH="$ANDROID_HOME/emulator:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH"
Then we need to accept the licenses and download the system image we need (using the image name from the previous step):
echo "Accepts all sdk licences" | |
yes | sdkmanager --licenses | |
echo "Download Simulator Image" | |
sdkmanager --install "$SIMULATOR_IMAGE" |
After downloading the image, we can create an Android emulator using the following command:
echo "no" | avdmanager --verbose create avd --force --name "$SIMULATOR_NAME" --device "pixel" --package "$SIMULATOR_IMAGE" --tag "google_apis" --abi "x86" |
To boot the Android emulator, we use the command below, passing the -no-window option together with "&" at the end (this lets the emulator run in the background). We also store its PID (process ID), so we can kill the emulator after the E2E tests finish and the CI server will not keep hanging and wasting resources (money):
# Start the emulator in the background
emulator -avd "$SIMULATOR_NAME" -no-snapshot -no-snapshot-load -verbose -noaudio -wipe-data -no-window -gpu swiftshader_indirect &
# you can also pass locale settings:
#   -prop persist.sys.language=en -prop persist.sys.country=GB
# store the PID of the last process started in the background by the shell
EMULATOR_PID=$!
To wait until the emulator is booted, we run this loop:
WAIT_CMD="$ANDROID_HOME/platform-tools/adb wait-for-device shell getprop init.svc.bootanim"
until $WAIT_CMD | grep -m 1 stopped; do
  echo "Waiting until emulator is booted..."
  sleep 1
done
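As a side note, a variant of this loop that you will also see in the wild polls the sys.boot_completed property instead of the boot animation; either approach works:

# alternative: wait until Android reports boot completed (the prop becomes "1")
until [ "$($ANDROID_HOME/platform-tools/adb shell getprop sys.boot_completed | tr -d '\r')" = "1" ]; do
  echo "Waiting until emulator is booted..."
  sleep 1
done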
Then we just unlock the screen, to be sure we can run our app:
# Unlock the Lock Screen
$ANDROID_HOME/platform-tools/adb shell input keyevent 82
Do not worry, we are nearing the end!
The last thing you might want to do before running the Cavy tests is to clear logcat and redirect its output into a file (so if the Cavy tests fail, you can analyze all the logs from the device). We store the PID of the logcat process too, since it is also launched in the background using "&":
echo "Capturing logcat" | |
$ANDROID_HOME/platform-tools/adb logcat -c | |
$ANDROID_HOME/platform-tools/adb logcat >./logcat.log & | |
LOGCAT_PID=$! |
Now we can run the command that builds the React Native app and executes the Cavy tests. In my case, that is "npm run e2e:android", which translates to the npm script "e2e:android": "cavy run-android".
Well, that's all for Android.
iOS
iOS is simpler. For brevity, I put all the explanations into comments (I hope you will understand what's going on).
The following script boots an iPhone 12 simulator running iOS 14.4 in the Appcenter build pipeline, to execute the E2E tests written in Cavy:
GRACE_TIME=80
APP_ID=org.stefanmajiros.bluepass.prod
echo "Setup iOS Simulator"
echo "Installing iOS Simulator dependencies"
brew tap wix/brew
brew update
brew install applesimutils
echo "Installing pods"
cd ./ios && /usr/local/lib/ruby/gems/2.7.0/bin/pod install --repo-update && cd ..
SIMULATOR_ID=$(xcrun simctl create My-iphone12 com.apple.CoreSimulator.SimDeviceType.iPhone-12 com.apple.CoreSimulator.SimRuntime.iOS-14-4)
echo "Running $SIMULATOR_ID"
echo "Booting simulator $SIMULATOR_ID"
xcrun simctl boot "$SIMULATOR_ID"
echo "Waiting ${GRACE_TIME}s to let it boot the new simulator"
for i in $(seq 1 $GRACE_TIME); do echo -n '.'; sleep 1; done; echo " OK."
echo "Starting logging service"
# note: double quotes here, so $APP_ID actually expands inside the predicate
xcrun simctl spawn booted log stream --debug --predicate "subsystem == \"$APP_ID\"" > deviceLog.log &
# echo application logs from React Native to a separate file
xcrun simctl spawn booted log stream --debug --predicate 'subsystem == "com.facebook.react.log"' > reactLog.log &
echo "Starting E2E tests"
npm run e2e:ios -- --udid "$SIMULATOR_ID"
xcrun simctl shutdown booted
xcrun simctl delete "$SIMULATOR_ID"
Final code for e2e.sh (Android + iOS)
Here is the final code for the e2e.sh file, which is responsible for launching the Android emulator / iOS simulator in the Appcenter build pipeline and running the E2E tests with Cavy:
#!/usr/bin/env bash
echo "Running E2E tests"
# ( setopt posixbuiltin; set; ) | less | cat
# -n: true if the string is not empty
if [ -n "$APPCENTER_ANDROID_VARIANT" ]; then
  echo "Setup Android emulator"
  SIMULATOR_IMAGE="system-images;android-28;google_apis;x86"
  SIMULATOR_NAME="Pixel_XL_API_28"
  ANDROID_HOME=~/Library/Android/sdk
  # ANDROID_SDK_ROOT=~/Library/Android/sdk
  # ANDROID_AVD_HOME=~/.android/avd
  PATH="$ANDROID_HOME/emulator:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH"
  echo "Accepts all sdk licences"
  yes | sdkmanager --licenses
  # touch /Users/runner/.android/repositories.cfg
  echo "Download Simulator Image"
  sdkmanager --install "$SIMULATOR_IMAGE"
  echo "Create Simulator '$SIMULATOR_NAME' with image '$SIMULATOR_IMAGE'"
  echo "no" | avdmanager --verbose create avd --force --name "$SIMULATOR_NAME" --device "pixel" --package "$SIMULATOR_IMAGE" --tag "google_apis" --abi "x86"
  # echo no | /Users/runner/Library/Android/sdk/tools/bin/avdmanager create avd -n "$SIMULATOR_NAME" -d pixel --package "$SIMULATOR_IMAGE"
  # emulator -list-avds
  # Start the emulator in the background
  emulator -avd "$SIMULATOR_NAME" -no-snapshot -no-snapshot-load -verbose -noaudio -wipe-data -no-window -gpu swiftshader_indirect &
  EMULATOR_PID=$!
  # wait until the emulator starts
  WAIT_CMD="$ANDROID_HOME/platform-tools/adb wait-for-device shell getprop init.svc.bootanim"
  until $WAIT_CMD | grep -m 1 stopped; do
    echo "Waiting until emulator is booted..."
    sleep 1
  done
  # debug for analytics
  adb shell setprop debug.firebase.analytics.app org.stefanmajiros.bluepass
  adb shell setprop log.tag.FA VERBOSE
  # Unlock the Lock Screen
  $ANDROID_HOME/platform-tools/adb shell input keyevent 82
  # Clear and capture logcat, before running tests
  echo "Capturing logcat"
  $ANDROID_HOME/platform-tools/adb logcat -c
  $ANDROID_HOME/platform-tools/adb logcat >./logcat.log &
  LOGCAT_PID=$!
  echo "Running Android E2E tests"
  npm run e2e:android
  # note: this file is source-d from appcenter-pre-build.sh, hence "return" instead of "exit"
  if [ $? -eq 0 ]; then
    echo "e2e SUCCESS"
    # Stop the background processes
    kill $LOGCAT_PID
    kill $EMULATOR_PID
    return 0
  else
    echo "e2e ERROR"
    kill $LOGCAT_PID
    kill $EMULATOR_PID
    return 1
  fi
else
  GRACE_TIME=80
  echo "Setup iOS Simulator"
  echo "Installing iOS Simulator dependencies"
  brew tap wix/brew
  brew update
  brew install applesimutils
  echo "Installing pods"
  cd ./ios && /usr/local/lib/ruby/gems/2.7.0/bin/pod install --repo-update && cd ..
  SIMULATOR_ID=$(xcrun simctl create My-iphone12 com.apple.CoreSimulator.SimDeviceType.iPhone-12 com.apple.CoreSimulator.SimRuntime.iOS-14-4)
  echo "Running $SIMULATOR_ID"
  echo "Booting simulator $SIMULATOR_ID"
  xcrun simctl boot "$SIMULATOR_ID"
  echo "Waiting ${GRACE_TIME}s to let it boot the new simulator"
  for i in $(seq 1 $GRACE_TIME); do echo -n '.'; sleep 1; done; echo " OK."
  echo "Starting logging service"
  xcrun simctl spawn booted log stream --debug --predicate 'subsystem == "org.stefanmajiros.bluepass.prod"' > deviceLog.log &
  # echo application logs from React Native to a separate file
  xcrun simctl spawn booted log stream --debug --predicate 'subsystem == "com.facebook.react.log"' > reactLog.log &
  echo "Starting E2E tests"
  npm run e2e:ios -- --udid "$SIMULATOR_ID"
  xcrun simctl shutdown booted
  xcrun simctl delete "$SIMULATOR_ID"
fi
Preparation for Sendgrid
Now that we can boot a device in the CI pipeline and run the E2E tests, you will see that several files are created when the Cavy tests finish:
- reactLog.log
- deviceLog.log
- cavy_results.xml
We will zip these files and send them as attachments to an email address of your choice, using the cURL command and the SendGrid API.
This is a screenshot of the SendGrid email in my Gmail - an example of what we want to accomplish:

Let's take a look at cavy_results.xml. It is written in a JUnit-compliant format:

The thing is, the SendGrid API accepts the dynamic template data as JSON only, so we need to convert the Cavy XML results to JSON. Here is the code, in Node.js:
/*
 IMPORTANT NOTE: Do not use console.log in the script below,
 as all logged values will be returned to the calling shell script.
*/
const xml2js = require('xml2js');
const yargs = require('yargs');

const argv = yargs
  .command('convertXmlReport', 'Converts XML JUNIT report from cavy to JSON', {
    data: {
      description: 'JUNIT REPORT in XML to convert',
      alias: 'fromXml',
      type: 'string',
    },
  })
  .option('xmlreport', {
    alias: 'xmlreport',
    description: 'XML report in JUNIT spec format',
    type: 'string',
  })
  .option('subject', {
    alias: 'subject',
    description: 'subject - in email',
    type: 'string',
  })
  .option('appname', {
    alias: 'appname',
    description: 'app name',
    type: 'string',
  })
  .demand('convertXmlReport')
  .demandOption('xmlreport')
  .demandOption('appname')
  .demandOption('subject')
  .help()
  .version('0.0.1')
  .alias('help', 'h').argv;

// argv._ is an array containing each element not attached to an option (or flag);
// these elements are referred to as commands in yargs.
// if (argv._.includes('data')) + we can access options directly
const xml = argv.xmlreport;
const subject = argv.subject;
const appname = argv.appname;

xml2js.parseString(
  xml,
  {mergeAttrs: true, explicitArray: false},
  (err, result) => {
    if (err) {
      // signal the failure to the calling shell script and stop
      process.stdout.write('-1');
      process.exit(1);
    }
    let mappedResult = result.testsuite;
    mappedResult.testcase = mappedResult.testcase.map(c => ({
      ...c,
      // convention: my passing test names contain the word 'works'
      passed: c.name.includes('works') && !c.failure ? true : false,
    }));
    mappedResult.appName = appname;
    mappedResult.subject = subject;
    let mappedJSON = JSON.stringify(mappedResult, null, 4);
    process.stdout.write(mappedJSON);
  },
);
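The parser lives in its own folder with its own package.json (see the folder structure above). A minimal one - the version ranges here are assumptions - might look like this:

{
  "name": "cavy-results-parser",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "xml2js": "^0.4.23",
    "yargs": "^17.1.1"
  }
}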
We will run this Node.js script from appcenter-pre-build.sh, where we:
- first convert the XML (Cavy test results) to JSON (due to the SendGrid requirements),
- then zip the log files (above) and convert the archive to base64 (SendGrid requires all email attachments to be base64-encoded),
- and then store the base64 string (the email attachment) in a shell variable:
cavyResults=$(<./cavy_results.xml)
# read output from stdout
templateDataString=$(node ./scripts/cavyParser/cavyResultsAsJson.js --convertXmlReport --xmlreport="$cavyResults" --subject="$SUBJECT" --appname="$APP_NAME")
# remove quotes from the start and the end, so we can embed it into the existing Sendgrid JSON request
templateData="${templateDataString%\"}"
templateData="${templateData#\"}"
echo "zipping attachments"
zip -r e2e-report.zip ./deviceLog.log ./reactLog.log ./cavy_results.xml ./jestOutput.txt
base64attachment=$(base64 ./e2e-report.zip)
Sending E2E Test Reports in Email With SendGrid
Now we need to create an email template in SendGrid - I believe you can create a SendGrid account on your own (or you already have one) - and yes, it's free.
Here is the welcome screen you will see after creating your account:

The email template we want to create is, in SendGrid's world, called a dynamic template - because when calling the SendGrid API, we will specify a templateID and pass some dynamic data for customization / personalization of the email.
Just to save you a few minutes, here is a quick walkthrough of creating a new dynamic email template in SendGrid:

In short, you will probably want to use the "Blank Template" (or an existing email design), then choose "Code Editor" and paste the HTML code for your email. Also note down the templateID, as we will use it later in our cURL call to SendGrid.
Besides using the pre-defined email templates offered by SendGrid, you can also develop your own template - SendGrid uses Handlebars for templating.
Developing Custom Sendgrid Template
Using the Test Data tab, you can preview changes to the email template (and the input data) in real time. The Test Data tab accepts JSON objects for email "personalization".
Here is an example (by the way, the data on the left is the output of the Node.js script that converts the Cavy XML results into JSON, shared a few lines above):

NOTE: If you want to use this design, feel free to find it here.
Verifying Sender
After you create your own email template, you will also need to verify one email address or web domain as a SendGrid sender (you can also use @gmail addresses).
Sender verification can be done in the Settings / Sender Authentication tab in the SendGrid console:

The last thing with SendGrid is making sure you have API keys. If you don't, you can create a new SendGrid API key here:

Once we have created a SendGrid account and a dynamic template, obtained an API key, and verified an email address, we can continue with the code that calls SendGrid's API.
Here we create the SendGrid request, write it into a sendgridRequest.json file, and call the SendGrid API (passing the proper SendGrid API key) using the cURL command:
generate_sendgrid_post_data() {
  cat <<EOF
{
  "personalizations":
  [
    {
      "to": [{"email": "$TO_MAIL"}],
      "dynamic_template_data": $templateData
    }
  ],
  "from": {"email": "$FROM_MAIL"},
  "subject": "$SUBJECT",
  "template_id": "$SENDGRID_TEMPLATE_ID",
  "attachments": [
    {
      "content": "$base64attachment",
      "filename": "e2e-report.zip"
    }
  ]
}
EOF
}
echo "creating request"
echo $(generate_sendgrid_post_data) > sendgridRequest.json
cat sendgridRequest.json
# 5. send mail
echo "Sending report using mail"
curl -iv --request POST \
  --url https://api.sendgrid.com/v3/mail/send \
  --header "Authorization: Bearer $SENDGRID_API_KEY" \
  --header 'Content-Type: application/json' \
  -d @sendgridRequest.json
echo "Report sent"
By doing this, you should receive a new email in your mailbox (the one specified in the TO_MAIL env variable) just a few seconds after the script runs.
Canceling Appcenter Build
Finally, after sending the E2E report to your email, you may want to cancel the Appcenter build, so it will not try to build the app for publishing to the stores.
It's again very simple, as Appcenter has its own API. We just need to add this cURL command at the end of the "appcenter-pre-build.sh" file:
echo "Cancelling Appcenter Build" | |
echo $API_TOKEN | |
echo $APP_NAME | |
echo $APPCENTER_BUILD_ID | |
curl -iv "https://appcenter.ms/api/v0.1/apps/$OWNER_NAME/$APP_NAME/builds/$APPCENTER_BUILD_ID" \ | |
-X PATCH \ | |
-d "{\"status\":\"cancelling\"}" \ | |
--header 'Content-Type: application/json' \ | |
--header "X-API-Token: $API_TOKEN" |
NOTE: An API token for MS Appcenter can be created in your Account Settings:

Here is the final code for appcenter-pre-build.sh, with some corner-case checks (e.g. if the tests fail):
#!/usr/bin/env bash
# mocking variables, when running locally
FROM_MAIL=business.stefanmajiros@gmail.com
ORG=majirosstefan
TO_MAIL=$RecipientMail
APP_NAME=$AppName
SENDGRID_TEMPLATE_ID=$SendgridTemplateId
BUILD_URL=https://appcenter.ms/orgs/$ORG/apps/$APP_NAME/build/branches/$APPCENTER_BRANCH/builds/$APPCENTER_BUILD_ID
# Firebase Config using base64
echo "Injecting secrets..."
echo "Updating Google JSON"
echo $GoogleServicesJson | base64 --decode >"$APPCENTER_SOURCE_DIRECTORY/android/app/google-services.json"
echo "Updating Google plist"
echo $GoogleServicesPlist | base64 --decode >"$APPCENTER_SOURCE_DIRECTORY/ios/GoogleService-Info.plist"
echo "Finished injecting secrets..."
echo "Google Services"
cat "$APPCENTER_SOURCE_DIRECTORY/ios/GoogleService-Info.plist"
echo "IsDev=${IsDev}" >>.env
echo "APPCENTER_BUILD_ID=${APPCENTER_BUILD_ID}" >>.env
echo "APPCENTER_BRANCH=${APPCENTER_BRANCH}" >>.env
# printf "\n.env created with contents:\n"
# cat .env
# 0. if the variable is unset, skip E2E and move on to the build step
if [ -z ${RUN_E2E_TESTS+x} ]; then
  echo "E2E are not turned on for this build."
  exit 0
fi
# 0.1 replace babelrc content, to keep logs from React Native
mv .babelrc .babelOriginal
mv .babelrc_E2E .babelrc
SUBJECT="Bluepass iOS: E2E Results"
if [ -n "$APPCENTER_ANDROID_VARIANT" ]; then
  SUBJECT="Bluepass Android: E2E Results"
fi
generate_dynamic_template_data_failed_tests() {
  cat <<EOF
{
  "name": "cavy",
  "appName":"$APP_NAME",
  "buildUrl":"$BUILD_URL",
  "testsFailedToRun": true,
  "subject": "$SUBJECT",
  "timestamp": "$(date +%Y-%m-%dT%H:%M:%S%z)"
}
EOF
}
# 2. run the test coverage report - produces ./jestOutput.txt
echo "Starting coverage report"
source "${APPCENTER_SOURCE_DIRECTORY}/scripts/coverageReport.sh"
# 1. run E2E tests
echo "Starting E2E tests with cavy"
base64attachment=AA
source "${APPCENTER_SOURCE_DIRECTORY}/scripts/e2e.sh"
FILE=./cavy_results.xml
if test -f "$FILE"; then
  # 3. parse cavy results, passing the file content as a single string
  cavyResults=$(<./cavy_results.xml)
  # read output from stdout
  templateDataString=$(node ./scripts/cavyParser/cavyResultsAsJson.js --convertXmlReport --xmlreport="$cavyResults" --subject="$SUBJECT" --appname="$APP_NAME")
  # remove quotes from the start and the end, so we can embed it into the existing Sendgrid JSON request
  templateData="${templateDataString%\"}"
  templateData="${templateData#\"}"
  echo "zipping attachments"
  zip -r e2e-report.zip ./deviceLog.log ./reactLog.log ./cavy_results.xml ./jestOutput.txt
  # or: ATTACHMENT=$(cat e2e-report.zip | base64)
  # zip -r coverage-report.zip "${APPCENTER_SOURCE_DIRECTORY}/coverage"
  # using mail.local ends in spam: uuencode "${APPCENTER_SOURCE_DIRECTORY}/jestOutput.txt" jestOutput.txt | mail -s "${SUBJECT} - Success!" ${TO_ADDRESS}
  base64attachment=$(base64 ./e2e-report.zip)
  # echo "$templateData"
else
  echo "E2E tests failed"
  SUBJECT="E2E tests failed"
  templateData=$(generate_dynamic_template_data_failed_tests)
  echo "$templateData"
  echo "zipping attachments"
  zip -r e2e-report.zip ./deviceLog.log ./reactLog.log ./jestOutput.txt
  # or: ATTACHMENT=$(cat e2e-report.zip | base64)
  # zip -r coverage-report.zip "${APPCENTER_SOURCE_DIRECTORY}/coverage"
  base64attachment=$(base64 ./e2e-report.zip)
fi
generate_sendgrid_post_data() {
  cat <<EOF
{
  "personalizations":
  [
    {
      "to": [{"email": "$TO_MAIL"}],
      "dynamic_template_data": $templateData
    }
  ],
  "from": {"email": "$FROM_MAIL"},
  "subject": "$SUBJECT",
  "template_id": "$SENDGRID_TEMPLATE_ID",
  "attachments": [
    {
      "content": "$base64attachment",
      "filename": "e2e-report.zip"
    }
  ]
}
EOF
}
echo "creating request"
echo $(generate_sendgrid_post_data) > sendgridRequest.json
cat sendgridRequest.json
# 5. send mail
echo "Sending report using mail"
curl -iv --request POST \
  --url https://api.sendgrid.com/v3/mail/send \
  --header "Authorization: Bearer $SENDGRID_API_KEY" \
  --header 'Content-Type: application/json' \
  -d @sendgridRequest.json
echo "Report sent"
echo "Cancelling build"
source "${APPCENTER_SOURCE_DIRECTORY}/scripts/cancelBuild.sh"
Final words
Also, I must say that I used patch-package (check it out - it's very easy to use) to customize Cavy's behavior: I added an --allTestRunTimeout CLI param, so that when the Cavy tests crash the app, CI will not keep hanging. Here is the patch file that you can copy and use with patch-package:
diff --git a/node_modules/cavy-cli/cavy.js b/node_modules/cavy-cli/cavy.js
index ad2c7e2..8dcc2f1 100755
--- a/node_modules/cavy-cli/cavy.js
+++ b/node_modules/cavy-cli/cavy.js
@@ -34,7 +34,9 @@ function test(cmd) {
  const outputAsXml = cmd.xml;
  const dev = cmd.dev;
  const bootTimeout = cmd.bootTimeout;
- runTests(commandName, entryFile, skipbuild, dev, outputAsXml, bootTimeout, args);
+ const allTestRunTimeout = cmd.allTestRunTimeout;
+
+ runTests(commandName, entryFile, skipbuild, dev, outputAsXml, bootTimeout, allTestRunTimeout, args);
}
// Stop quitting unless we want to
@@ -63,6 +65,11 @@ program
    'Set how long the CLI should wait for the RN app to boot '
    + '(is ignored if used with --skipbuild, defaults to 2 minutes, requires Cavy 4.0.0)'
  )
+ .option(
+   '-t, --allTestRunTimeout <minutes>',
+   'Set how long the CLI should wait for the RN app for finishing tests '
+   + '(defaults to 2 minutes, requires Cavy 4.0.0)'
+ )
  .option('--xml', 'Write out test results to cavy_results.xml (requires Cavy 3.3.0)')
  .allowUnknownOption()
  .action(cmd => test(cmd));
@@ -81,6 +88,12 @@ program
    'Set how long the CLI should wait for the RN app to boot '
    + '(is ignored if used with --skipbuild, defaults to 2 minutes, requires Cavy 4.0.0)'
  )
+ .option(
+   '-t, --allTestRunTimeout <minutes>',
+   'Set how long the CLI should wait for the RN app for finishing tests '
+   + '(defaults to 2 minutes, requires Cavy 4.0.0)'
+ )
+
  .option('--xml', 'Write out test results to cavy_results.xml (requires Cavy 3.3.0)')
  .allowUnknownOption()
  .action(cmd => test(cmd));
diff --git a/node_modules/cavy-cli/server.js b/node_modules/cavy-cli/server.js
index 64faf22..5ad60e8 100644
--- a/node_modules/cavy-cli/server.js
+++ b/node_modules/cavy-cli/server.js
@@ -9,7 +9,8 @@ const server = http.createServer();
// Setup local variables for server
server.locals = {
  appBooted: false,
- testCount: 0
+ testCount: 0,
+ testFinished: false
};
// Initialize a WebSocket Server instance
@@ -27,16 +28,27 @@ wss.on('connection', socket => {
    case 'singleResult':
      logTestResult(json.data);
      break;
-   case 'testingComplete':
+   case 'testingComplete':{
+     server.locals.testFinished = true;
      finishTesting(json.data);
      break;
+   }
+   default: {
+     console.log(json.event);
+   }
  }
});
+ // socket.onerror = console.error;
+
// Now we have made a connection with Cavy, we know the app has booted.
server.locals.appBooted = true;
})
+wss.on("error", (err) => {
+ console.log("Error:",err)
+})
+
// Internal: Takes a count and string, returns formatted and pluralized string.
// e.g. countString(5, 'failure') => '5 failures'
// countString(1, 'failure') => '1 failure'
diff --git a/node_modules/cavy-cli/src/runTests.js b/node_modules/cavy-cli/src/runTests.js
index 14b4cb1..a880189 100644
--- a/node_modules/cavy-cli/src/runTests.js
+++ b/node_modules/cavy-cli/src/runTests.js
@@ -7,6 +7,7 @@ const { spawn, execFileSync } = require('child_process');
// Default boot timeout in minutes
const BOOT_TIMEOUT = 2;
+const ALL_TESTS_RUN_TIMEOUT = 2;
let switched = false;
@@ -44,7 +45,7 @@ function runAdbReverse() {
    console.log(`cavy: Running ${adbPath} ${adbArgs.join(' ')}`);
    execFileSync(adbPath, adbArgs, {stdio: 'inherit'});
  } catch(e) {
-   console.error(`Could not run adb reverse: ${e.message}.`);
+   console.log(`Could not run adb reverse: ${e.message}.`);
    process.exit(1);
  }
}
@@ -57,7 +58,7 @@ function getAdbPath() {
}
// Start test server, listening for test results to be posted.
-function runServer({ command, dev, outputAsXml, skipbuild, bootTimeout }) {
+function runServer({ command, dev, outputAsXml, skipbuild, bootTimeout, allTestRunTimeout }) {
  server.locals.dev = dev;
  server.locals.outputAsXml = outputAsXml;
  server.listen(8082, () => {
@@ -73,6 +74,8 @@ function runServer({ command, dev, outputAsXml, skipbuild, bootTimeout }) {
  } else {
    // bootTimeout defaults to two minutes
    const timeout = bootTimeout || BOOT_TIMEOUT;
+   const testsTimeout = allTestRunTimeout || ALL_TESTS_RUN_TIMEOUT
+   console.log("Setting timeout to", timeout, new Date().toISOString())
    setTimeout(() => {
      if (!server.locals.appBooted) {
        console.log(`No response from Cavy within ${timeout} minutes.`);
@@ -81,6 +84,16 @@ function runServer({ command, dev, outputAsXml, skipbuild, bootTimeout }) {
    }
    // Convert bootTimeout to milliseconds
    }, minsToMillisecs(timeout));
+   setTimeout(() => {
+     if (!server.locals.testFinished) {
+       console.log(`No response from Cavy within ${allTestRunTimeout} minutes.`);
+       console.log('Terminating processes.');
+       process.exit(1);
+     } else {
+       console.log("Finished")
+     }
+   // Convert bootTimeout to milliseconds
+   }, minsToMillisecs(testsTimeout));
  }
  });
}
@@ -93,7 +106,7 @@ function runServer({ command, dev, outputAsXml, skipbuild, bootTimeout }) {
// outputAsXml: whether to write and save the results to XML file
// bootTimeout: how long the CLI should wait for the RN app to boot.
// args: any extra arguments the user would usually to pass to `react native run...`
-function runTests(command, file, skipbuild, dev, outputAsXml, bootTimeout, args) {
+function runTests(command, file, skipbuild, dev, outputAsXml, bootTimeout, allTestRunTimeout, args) {
  // Assume entry file is 'index.js' if user doesn't supply one.
  const entryFile = file || 'index.js';
@@ -134,14 +147,15 @@ function runTests(command, file, skipbuild, dev, outputAsXml, bootTimeout, args)
  });
  if (skipbuild) {
-   runServer({ command, dev, outputAsXml, skipbuild, bootTimeout });
+   runServer({ command, dev, outputAsXml, skipbuild, bootTimeout, allTestRunTimeout});
  } else {
    // Build the app, start the test server and wait for results.
    console.log(`cavy: Running \`react-native ${command}\`...`);
    let rn = spawn('react-native', [command, ...args], {
      stdio: 'inherit',
-     shell: true
+     shell: true,
+
    });
    // Wait for the app to build first...
@@ -151,7 +165,8 @@ function runTests(command, file, skipbuild, dev, outputAsXml, bootTimeout, args)
    if (code) {
      return process.exit(code);
    }
-   runServer({ command, dev, outputAsXml, skipbuild, bootTimeout });
+   console.log("cavy: running tests")
+   runServer({ command, dev, outputAsXml, skipbuild, bootTimeout, allTestRunTimeout });
  });
  }
}
And yes, I acknowledge that some parts could be refactored - but now you have all the pieces for running E2E tests on Appcenter, which was the main aim of this article.